Comments

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-07T22:43:58.501Z · LW · GW

(EDIT: See below.) I'm afraid that I am now confused. I'm not clear on what you mean by "these traits", so I don't know what you think I am being confident about. You seem to think I'm arguing that AIs will converge on a safe design and I don't remember saying anything remotely resembling that.

EDIT: I think I figured it out on the second or third attempt. I'm not 100% committed to the proposition that if we make an AI, and know how we did so, we can definitely make sure it's fun and friendly, as opposed to fundamentally uncontrollable and unknowable. However, it seems virtually certain to me that we will figure out a significant amount about designing AIs to do what we want in the process of developing them. People who subscribe to various "FOOM" theories about AI coming out of nowhere will probably disagree with this, as is their right, but I don't find any of those theories plausible.

I also hope I didn't give the impression that I thought it was meaningfully possible to create a God-like AI without understanding how to make AI. It's conceivable in that such a creation story is not a logical contradiction like a square circle or a colourless green dream sleeping furiously, but that is all. I think it is actually staggeringly unlikely that we will make an AI without either knowing how to make an AI, or knowing how to upload people who can then make an AI and tell us how they did it.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-07T11:05:03.245Z · LW · GW

I won't argue against the claim that we could conceivably create an AI without knowing anything about how to create an AI. It's trivially true in the same way that we could conceivably turn a monkey loose on a typewriter and get strong AI.

I also agree with you that if we got an AI that way we'd have no idea how to get it to do any one thing rather than another and no reason to trust it.

I don't currently agree that we could make such an AI using a non-functioning brain model plus "a bit of evolution". I am open to argument on the topic but currently it seems to me that you might as well say "magic" instead of "evolution" and it would be an equivalent claim.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-03T03:03:52.664Z · LW · GW

A universal measure for anything is a big demand. Mostly we get by with some sort of somewhat-fuzzy "reasonable person" standard, which obviously we can't fully explicate in neurological terms yet either, but which is much more achievable.

Liberty isn't a one-dimensional quality either, since for example you might have a country with little real freedom of the press but lots of freedom to own guns, or vice versa.

What you would have to do to develop a measure with significant intersubjective validity is to ask a whole bunch of relevantly educated people what things they consider important freedoms and what incentives they would need to be offered to give them up, to figure out how they weight the various freedoms. This is quite do-able, and in fact we do very similar things when we do QALY analysis of medical interventions to find out how much people value a year of life without sight compared to a year of life with sight (or whatever).

Fundamentally it's not different to figuring out people's utility functions, except we are restricting the domain of questioning to liberty issues.
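
To make the elicitation idea concrete, here is a minimal Python sketch of the kind of aggregation described above, assuming purely hypothetical survey responses; the freedoms, the trade-off question and all the numbers are invented for illustration.

```python
# Illustrative only: every freedom, question and number below is invented.
from statistics import median

# Each respondent answers a trade-off question such as "what fraction of your
# income would you give up to keep this freedom?" (a stand-in for whatever
# incentive question the survey actually uses).
responses = {
    "press":    [0.30, 0.25, 0.40, 0.20],
    "firearms": [0.05, 0.10, 0.02, 0.15],
    "movement": [0.50, 0.45, 0.60, 0.40],
}

# Take the median answer as each freedom's raw weight, then normalise so the
# weights sum to one, giving a crude intersubjective index of "liberty".
raw = {freedom: median(vals) for freedom, vals in responses.items()}
total = sum(raw.values())
weights = {freedom: w / total for freedom, w in raw.items()}

def liberty_index(scores):
    """Weighted sum of per-freedom scores, each score in [0, 1]."""
    return sum(weights[f] * scores.get(f, 0.0) for f in weights)

print(weights)
print(liberty_index({"press": 0.2, "firearms": 0.9, "movement": 0.3}))
```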

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-03T00:17:38.366Z · LW · GW

I tend to think that you don't need to adopt any particular position on free will to observe that people in North Korea lack freedom from government intervention in their lives, access to communication and information, a genuine plurality of viable life choices and other objectively identifiable things humans value. We could agree for the sake of argument that "free will is an illusion" (for some definitions of free will and illusion) yet still think that people in New Zealand have more liberty than people in North Korea.

I think that you are basically right that the frame problem is like the problem of building a longer bridge, or a faster car: you are never going to solve the entire class of problem at a stroke so that you can make infinitely long bridges or infinitely fast cars, but you can make meaningful incremental progress over time. I've said from the start that capturing the human ability to make philosophical judgments about liberty is a hard problem, but I don't think it is an impossible one - just a lot easier than creating a program that does that and solves all the other problems of strong AI at once.

In the same way that it turns out to be much easier to make a self-driving car than a strong AI, I think we'll have useful natural-language parsing of terms like "liberty" before we have strong AI.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-02T20:37:12.839Z · LW · GW

I said earlier in this thread that we can't do this and that it is a hard problem, but also that I think it's a sub-problem of strong AI and we won't have strong AI until long after we've solved this problem.

I know that Word of Eliezer is that disciples won't find it productive to read philosophy, but what you are talking about here has been discussed by analytic philosophers and computer scientists as "the frame problem" since the eighties and it might be worth a read for you. Fodor argued that there are a class of "informationally unencapsulated" problems where you cannot specify in advance what information is and is not relevant to solving the problem, hence really solving them as opposed to coming up with a semi-reliable heuristic is an incredibly difficult problem for AI. Defining liberty or identifying it in the wild seems like it's an informationally unencapsulated problem in that sense and hence a very hard one, but one which AI has to solve before it can tackle the problems humans tackle.

If I recall correctly Fodor argued in Modules, Frames, Fridgeons, Sleeping Dogs, and the Music of the Spheres that this problem was in fact the heart of the AI problem, but that proposition was loudly raspberried in the literature by computer scientists. You can make up your own mind about that one.

Here's a link to the Stanford Encyclopedia of Philosophy page on the subject.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-02T20:19:32.380Z · LW · GW

I didn't think we needed to put the uploaded philosopher under billions of years of evolutionary pressure. We would put your hypothetical pre-God-like AI in one bin and update it under pressure until it becomes God-like, and then we upload the philosopher separately and use them as a consultant.

(As before I think that the evolutionary landscape is unlikely to allow a smooth upward path from modern primate to God-like AI, but I'm assuming such a path exists for the sake of the argument).

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-02T00:49:06.528Z · LW · GW

I think there is insufficient information to answer the question as asked.

If I offer you the choice of a box with $5 in it, or a box with $500 000 in it, and I know that you are close enough to a rational utility-maximiser that you will take the $500 000, then I know what you will choose and I have set up various facts in the world to determine your choice. Yet it does not seem on the face of it as if you are not free.

On the other hand if you are trying to decide between being a plumber or a blogger and I use superhuman AI powers to subtly intervene in your environment to push you into one or the other without your knowledge then I have set up various facts in the world to determine your choice and it does seem like I am impinging on your freedom.

So the answer seems to depend at least on the degree of transparency between A and B in their transactions. Many other factors are almost certainly relevant, but that issue (probably among many) needs to be made clear before the question has a simple answer.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-01T11:49:29.865Z · LW · GW

If I was unclear, I was intending that remark to apply to the original hypothetical scenario where we do have a strong AI and are trying to use it to find a critical path to a highly optimal world. In the real world we obviously have no such capability. I will edit my earlier remark for clarity.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-01T11:46:59.120Z · LW · GW

The standard LW position (which I think is probably right) is that human brains can be modelled with Turing machines, and if that is so then a Turing machine can in theory do whatever it is we do when we decide that something is liberty, or pornography.

There is a degree of fuzziness in these words to be sure, but the fact we are having this discussion at all means that we think we understand to some extent what the term means and that we value whatever it is that it refers to. Hence we must in theory be able to get a Turing machine to make the same distinction although it's of course beyond our current computer science or philosophy to do so.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-01T00:16:47.826Z · LW · GW

If you can do that, then you can just find someone who you think understands what we mean by "liberty" (ideally someone with a reasonable familiarity with Kant, Mill, Dworkin and other relevant writers), upload their brain without understanding it, and ask the uploaded brain to judge the matter.

(Off-topic: I suspect that you cannot actually get a markedly superhuman AI that way, because the human brain could well be at or near a peak in the evolutionary landscape, so that there is no evolutionary pathway from a current human brain to a vastly superhuman brain. Nothing I am aware of in the laws of physics or biology says that there must be any such pathway, and since evolution is purposeless it would be an amazing lucky break if it turned out that we were on the slope of the highest peak there is, and that the peak extends to God-like heights. That would be like putting evolutionary pressure on a cheetah and discovering that we can evolve a cheetah that runs at a significant fraction of c.

However I believe my argument still works even if I accept for the sake of argument that we are on such a peak in the evolutionary landscape, and that creating God-like AI is just a matter of running a simulated human brain under evolutionary pressure for a few billion simulated years. If we have that capability then we must also be able to run a simulated philosopher who knows what "liberty" refers to).

EDIT: Downvoting this without explaining why you disagree doesn't help me understand why you disagree.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-01T00:07:14.669Z · LW · GW

Why? Just because the problem is less complicated, does not mean it will be solved first. A more complicated problem can be solved before a less complicated problem, especially if there is more known about it.

To clarify, it seems to me that modelling hairyfigment's ability to decide whether people have liberty is not only simpler than modelling hairyfigment's whole brain but is also a subset of that problem. It does seem to me that you have to solve all subsets of Problem B before you can be said to have solved Problem B, hence you have to have solved the liberty-assessing problem if you have solved the strong AI problem, hence it makes no sense to postulate a world where you have a strong AI but can't explain liberty to it.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-05-01T00:02:48.006Z · LW · GW

We have identified the point on which we differ, which is excellent progress. I used fictional worlds as examples, but would it solve the problem if I used North Korea and New Zealand as examples instead, or the world in 1814 and the world in 2014? Those worlds or nations were not created to be transparent to human examination but I believe you do have the faculty to distinguish between them.

I don't see how this is harder than getting an AI to handle any other context-dependent, natural language descriptor, like "cold" or "heavy". "Cold" does not have a single, unitary definition in physics but it is not that hard a problem to figure out when you should say "that drink is cold" or "that pool is cold" or "that liquid hydrogen is cold". Children manage it and they are not vastly superhuman artificial intelligences.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-30T12:10:26.062Z · LW · GW

I'll try to lay out my reasoning in clear steps, and perhaps you will be able to tell me where we differ exactly.

  1. Hairyfigment is capable of reading Orwell's 1984, and Banks' Culture novels, and identifying that the people in the hypothetical 1984 world have less liberty than the people in the hypothetical Culture world.
  2. This task does not require the full capabilities of hairyfigment's brain, in fact it requires substantially less.
  3. A program that does A+B has to be more complicated than a program that does A alone, where A and B are two different, significant sets of problems to solve. (EDIT: If these programs are efficiently written)
  4. Given 1-3, a program that can emulate hairyfigment's liberty-distinguishing faculty can be much, much less complicated than a program that can do that plus everything else hairyfigment's brain can do.
  5. If we can simulate a complete human brain, that is the same as having solved the strong AI problem.
  6. A program that can do everything hairyfigment's brain can do is a program that simulates a complete human brain.
  7. Given 4-6 it is much less complicated to emulate hairyfigment's liberty-distinguishing faculty than to solve the strong AI problem.
  8. Given 7, it is unreasonable to postulate a world where we have solved the strong AI problem, in spades, so much so that we have a vastly superhuman AI, but we still haven't solved the problem of emulating hairyfigment's liberty-distinguishing faculty.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-30T08:07:48.424Z · LW · GW

I really am. I think a human brain could rule out superficially attractive dystopias and also do many, many other things as well. If you think you personally could distinguish between a utopia and a superficially attractive dystopia given enough relevant information (and logically you must think so, because you are using them as different terms) then it must be the case that a subset of your brain can perform that task, because it doesn't take the full capabilities of your brain to carry out that operation.

I think this subtopic is unproductive however, for reasons already stated. I don't think there is any possible world where we cannot achieve a tiny, partial solution to the strong AI problem (codifying "liberty", and similar terms) but we can achieve a full-blown, transcendentally superhuman AI. The first problem is trivial compared to the second. It's not a trivial problem, by any means, it's a very hard problem that I don't see being overcome in the next few decades, but it's trivial compared to the problem of strong AI which is in turn trivial compared to the problem of vastly superhuman AI. I think Stuart_Armstrong is swallowing a whale and then straining at a gnat.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-30T06:28:16.107Z · LW · GW

I could be wrong but I believe that this argument relies on an inconsistent assumption, where we assume we have solved the problem of creating an infinitely powerful AI, but we have not solved the problem of operationally defining commonplace English words which hundreds of millions of people successfully understand in such a way that a computer can perform operations using them.

It seems to me that the strong AI problem is many orders of magnitude more difficult than the problem of rigorously defining terms like "liberty". I imagine that a relatively small part of the processing power of one human brain is all that is needed to perform operations on terms like "liberty" or "paternalism" and engage in meaningful use of them so it is a much, much smaller problem than the problem of creating even a single human-level AI, let alone a vastly superhuman AI.

If in our imaginary scenario we can't even define "liberty" in such a way that a computer can use the term, it doesn't seem very likely that we can build any kind of AI at all.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-29T21:40:30.648Z · LW · GW

The strong AI problem is much easier to solve than the problem of motivating an AI to respect liberty. For instance, the first one can be brute forced (eg AIXItl with vast resources), the second one can't.

I don't believe that strong AI is going to be as simple to brute force as a lot of LessWrongers believe, personally, but if you can brute force strong AI then you can just get it to run a neuron-by-neuron simulation of the brain of a reasonably intelligent first year philosophy student who understands the concept of liberty and tell the AI not to take actions which the simulated brain thinks offend against liberty.

That is assuming that in this hypothetical future scenario where we have a strong AI we are capable of programming that strong AI to do any one thing instead of another, but if we cannot do that then the entire discussion seems to me to be moot.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-29T11:34:15.686Z · LW · GW

I think Asimov did this first with his Multivac stories, although rather than promptly destroy itself Multivac executed a long-term plan to phase itself out.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-29T11:29:33.362Z · LW · GW

Precisely and exactly! That's the whole of the problem - optimising for one thing (appearance) results in the loss of other things we value.

This just isn't always so. If you instruct an AI to optimise a car for speed, efficiency and durability but forget to specify that it has to be aerodynamic, you aren't going to get a car shaped like a brick. You can't optimise for speed and efficiency without optimising for aerodynamics too. In the same way it seems highly unlikely to me that you could optimise a society for freedom, education, just distribution of wealth, sexual equality and so on without creating something pretty close to optimal in terms of unwanted pregnancies, crime and other important axes.

Even if it's possible to do this, it seems like something which would require extra work and resources to achieve. A magical genie AI might be able to make you a super-efficient brick-shaped car by using Sufficiently Advanced Technology indistinguishable from magic but even for that genie it would have to be more work than making an equally optimal car by the defined parameters that wasn't a silly shape. In the same way an effectively God-like hypothetical AI might be able to make a siren world that optimised for everything except crime and create a world perfect in every way except that it was rife with crime but it seems like it would be more work, not less.
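
As a toy illustration of that coupling (not anything from the original post), here is a small Python sketch in which only speed and efficiency are scored, using made-up formulas and drag numbers, yet the most aerodynamic shape still wins because both stated objectives depend on drag.

```python
# Toy model, not from the original post: the formulas and drag numbers are
# invented purely to show the coupling between objectives.

candidates = [
    # (shape, drag coefficient) -- the brick has high drag, the teardrop low
    ("brick",    1.05),
    ("sedan",    0.30),
    ("teardrop", 0.15),
]

def top_speed(drag):    # higher drag -> lower top speed (arbitrary units)
    return 100.0 / drag ** (1.0 / 3.0)

def efficiency(drag):   # higher drag -> more fuel burned per km
    return 50.0 / (1.0 + drag)

# We "forget" to ask for aerodynamics and score only speed plus efficiency...
best_shape, best_drag = max(candidates, key=lambda c: top_speed(c[1]) + efficiency(c[1]))

# ...yet the most aerodynamic shape still wins, because neither stated
# objective can be optimised without also optimising drag.
print(best_shape)  # -> "teardrop"
```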

Next challenge: define liberty in code. This seems extraordinarily difficult.

I think if we can assume we have solved the strong AI problem, we can assume we have solved the much lesser problem of explaining liberty to an AI.

So we do agree that there are problem with an all-powerful genie?

We've got a problem with your assumptions about all-powerful genies, I think, because your argument relies on the genie being so ultimately all-powerful that it is exactly as easy for the genie to make an optimal brick-shaped car, or an optimal car made out of tissue paper and post-it notes, as it is to make an optimal proper car. I don't think that genie can exist in any remotely plausible universe.

If it's not all-powerful to that extreme then it's still going to be easier for the genie to make a society optimised (or close to it) across all the important axes at once than one optimised across all the ones we think to specify while tanking all the rest. So for any reasonable genie I still think market worlds don't make sense as a concept. Siren worlds, sure. Market worlds, not so much, because the things we value are deeply interconnected and you can't just arbitrarily dump-stat some while efficiently optimising all the rest.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-28T21:03:29.487Z · LW · GW

I think this and the "finite resources therefore tradeoffs" argument both fail to take seriously the interconnectedness of the optimisation axes which we as humans care about.

They assume that every possible aspect of society is an independent slider which a sufficiently advanced AI can position at will, even though this society is still going to be made up of humans, will have to be brought about by or with the cooperation of humans and will take time to bring about. These all place constraints on what is possible because the laws of physics and human nature aren't infinitely malleable.

I don't think discreet but total control over a world is compatible with things like liberty, which seem like obvious qualities to specify in an optimal world we are building an AI to search for.

I think what we might be running in to here is less of an AI problem and more of a problem with the model of AI as an all-powerful genie capable of absolutely anything with no constraints whatsoever.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-28T11:21:16.735Z · LW · GW

I don't think you have highlighted a fundamental problem since we can just specify that we mean a low percentage of conceptions being deliberately aborted in liberal societies where birth control and abortion are freely available to all at will.

My point, though, is that I don't think it is very plausible that "marketing worlds" will organically arise where there are no humans, or no conception, but which tick all the other boxes we might think to specify in our attempts to describe an ideal world. I don't see how there being no conception or no humans could possibly be a necessary trade-off with things like wealth, liberty, rationality, sustainability, education, happiness, the satisfaction of rational and well-informed preferences and so forth.

Of course a sufficiently God-like malevolent AI could presumably find some way of gaming any finite list we give it, since there are probably an unbounded number of ways of bringing about horrible worlds, so this isn't a problem with the idea of siren worlds. I just don't find the idea of market worlds very plausible because so many of the things we value are fundamentally interconnected.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-28T03:07:38.127Z · LW · GW

It's a proposition with a truth value in a sense, but if we are disagreeing about the topic then it seems most likely that the term "one of the world's foremost intellectuals" is ambiguous enough that elucidating what we mean by the term is necessary before we can worry about the truth value.

Obviously I think the proposition is false, and so obviously false that it needs little further argument to establish the implied claims: that it is rational to regard calling Eliezer "one of the world's foremost intellectuals" as cult-like, and that it is rational to place a low value on a rationalist forum if it is cult-like.

So the question is how you are defining "one of the world's foremost intellectuals"? I tend to define it as a very small group of very elite thinkers, typically people in their fifties or later with outstanding careers who have made major contributions to human knowledge or ethics.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-28T00:48:43.615Z · LW · GW

In a world where Eliezer is by objective standards X, then in that world it is correct to say he is X, for any X. That X could be "one of the world's foremost intellectuals" or "a moose" and the argument still stands.

To establish whether it is objectively true that "his basic world view is fundamentally correct in important ways where the mainstream of intellectuals are wrong" would be beyond the scope of the thread, I think, but I think the mainstream has good grounds to question both those sub-claims. Worrying about steep-curve AI development might well be fundamentally incorrect as opposed to fundamentally correct, for example, and if the former is true then Eliezer is fundamentally wrong. You might also be wrong about what mainstream intellectuals think. For example the bitter struggle between frequentism and Bayesianism is almost totally imaginary, so endorsing Bayesianism is not going against the mainstream.

Perhaps more fundamentally, literally anything published in the applied analytic philosophy literature is just as much at the cutting edge of current intellectual discourse as Yudkowsky's work. So your proposed definition fails to pick him out as being special, unless every published applied analytic philosopher is also one of the world's foremost intellectuals.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-28T00:13:45.393Z · LW · GW

It seems based on your later comments that the premise of marketing worlds existing relies on there being trade-offs between our specified wants and our unspecified wants, so that the world optimised for our specified wants must necessarily be highly likely to be lacking in our unspecified ones ("A world with maximal bananas will likely have no apples at all").

I don't think this is necessarily the case. If I only specify that I want low rates of abortion, for example, then I think it highly likely that I'd get a world that also has low rates of STD transmission, unwanted pregnancy, poverty, sexism and religiosity, because they all go together. I think you could specify any one of those variables and almost all of the time you would get all the rest as a package deal without specifying them.

Of course a malevolent AI could probably deliberately construct a siren world to maximise one of those values and tank the rest but such worlds seem highly unlikely to arise organically. The rising tide of education, enlightenment, wealth and egalitarianism lifts most of the important boats all at once, or at least that is how it seems to me.

Comment by PhilosophyTutor on Siren worlds and the perils of over-optimised search · 2014-04-28T00:01:35.928Z · LW · GW

Calling Eliezer Yudkowsky one of the world's foremost intellects is the kind of cult-like behaviour that gives LW a bad reputation in some rationalist circles. He's one of the foremost Harry Potter fanfiction authors and a prolific blogger, who has also authored a very few minor papers. He's a smart guy but there are a lot of smart guys in the world.

He articulates very important ideas, but so do very many teachers of economics, ethics, philosophy and so on. That does not make them very important people (although the halo effect makes some students think so).

(Edited to spell Eliezer's name correctly, with thanks for the correction).

Comment by PhilosophyTutor on The Landmark Forum — a rationalist's first impression · 2012-10-31T07:19:46.796Z · LW · GW

"Cult" might not be a very useful term given the existing LW knowledge base, but it's a very useful term. I personally recommend Steve Hassan's book "Combating Cult Mind Control" as an excellent introduction to how some of the nastiest memetic viruses propagate and what little we can do about them.

He lists a lengthy set of characteristics which cults tend to have in common which go beyond the mind-controlling tactics of mainstream religions. My fuzzy recollection is that est/Landmark was considered a cult by the people who make it their area of interest to keep track of currently active cults.

In a sense these organisations are the polar opposite of LW. LW attempts to maximise rationality, although not always successfully, and cults attempt to create maximum dependence and control.

Comment by PhilosophyTutor on Rationality Quotes September 2012 · 2012-09-26T14:22:46.549Z · LW · GW

A possible interpretation is that the "strength" of a belief reflects the importance one attaches to acting upon that belief. Two people might both believe with 99% confidence that a new nuclear power plant is a bad idea, yet one of the two might go to a protest about the power plant and the other might not, and you might try to express what is going on there by saying that one holds that belief strongly and the other weakly.

You could of course also try to express it in terms of the two people's confidence in related propositions like "protests are effective" or "I am the sort of person who goes to protests". In that case strength would be referring to the existence or nonexistence of related beliefs which together are likely to be action-driving.

Comment by PhilosophyTutor on Real World Solutions to Prisoners' Dilemmas · 2012-07-11T04:19:25.211Z · LW · GW

It seems from my perspective that we are talking past each other and that your responses are no longer tracking the original point. I don't personally think that deserves upvotes, but others obviously differ.

Your original claim was that:

Said literature gives advice, reasoning and conclusions that is epistemically, instrumentally and normatively bad.

Now given that game theory is not making any normative claims, it can't be saying things which are normatively bad. Similarly since game theory does not say that you should either go out and act like a game-theory-rational agent or that you should act as if others will do so, it can't be saying anything instrumentally bad either.

I just don't see how it could even be possible for game theory to do what you claim it does. That would be like stating that a document describing the rules of poker was instrumentally and normatively bad because it encouraged wasteful, zero-sum gaming. It would be mistaking description for prescription.

We have already agreed, I think, that there is nothing epistemically bad about game theory taken as it is.

Everything below responds to the off-track discussion above and can be safely ignored by posters not specifically interested in that digression.

In game theory each player's payoff matrix is their own. Notice that Codependent Romeo does not care where Codependent Juliet ends up in her payoff matrix. If Codependent Romeo was altruistic in the sense of wanting to maximise Juliet's satisfaction with her payoff, he'd be keeping silent. Because Codependent Romeo is game-theory-rational, he's indifferent to Codependent Juliet's satisfaction with her outcome and only cares about maximising his personal payoff.

The standard assumption in a game-theoretic analysis is that a poker player wants money, a chess player wants to win chess games and so on, and that they are indifferent to their opponents' opinions about the outcome, just as Codependent Romeo is maximising his own payoff matrix and is indifferent to Codependent Juliet's.

That is what we attempt to convey when we tell people that game-theory-rational players are neither benevolent nor malevolent. Even if you incorporate something you want to call "altruism" into their preference order, they still don't care directly about where anyone else ends up in those other peoples' preference orders.

Comment by PhilosophyTutor on Real World Solutions to Prisoners' Dilemmas · 2012-07-09T11:47:23.547Z · LW · GW

This isn't about the agents having selfish desires (in fact, they don't even have to "not care at all about other entities"---altruism determines what the utility function is, not how to maximise it.)

This is wrong. The standard assumption is that game-theory-rational entities are neither altruistic nor malevolent. Otherwise the Prisoner's Dilemma wouldn't be a dilemma in game theory. It's only a dilemma as long as both players are solely interested in their own outcomes. As soon as you allow players to have altruistic interests in other players' outcomes it ceases to be a dilemma.
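
A minimal sketch of that point, using standard textbook-style Prisoner's Dilemma payoffs chosen purely for illustration: with self-regarding utilities defection dominates, but once each player's utility also puts weight on the other player's payoff, mutual cooperation becomes stable.

```python
# Standard-style Prisoner's Dilemma payoffs, chosen for illustration.
PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utility(my_move, their_move, altruism=0.0):
    """Own payoff plus an altruism-weighted share of the other player's payoff."""
    mine, theirs = PAYOFFS[(my_move, their_move)]
    return mine + altruism * theirs

def best_reply(their_move, altruism=0.0):
    return max("CD", key=lambda m: utility(m, their_move, altruism))

for weight in (0.0, 1.0):
    replies = {their: best_reply(their, weight) for their in "CD"}
    print(f"altruism weight {weight}: best replies {replies}")
# weight 0.0: "D" is the best reply to everything -- the classic dilemma
# weight 1.0: "C" becomes the best reply, so mutual cooperation is stable
```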

You can do similar mathematical analyses with altruistic agents, but at that point speaking strictly you're doing decision-theoretic calculations or possibly utilitarian calculations not game-theoretic calculations.

Utilitarian ethics, game theory and decision theory are three different things, and it seems to me your criticism assumes that statements about game theory should be taken as statements about utilitarian ethics or statements about decision theory. I think that is an instance of the fallacy of composition and we're better served to stay very aware of the distinctions between those three frameworks.

Comment by PhilosophyTutor on Real World Solutions to Prisoners' Dilemmas · 2012-07-09T08:15:26.973Z · LW · GW

I would be interested in reading about the bases for your disagreement. Game theory is essentially the exploration of what happens if you postulate entities who are perfectly informed, personal utility-maximisers who do not care at all either way about other entities. There's no explicit or implicit claim that people ought to behave like those entities, thus no normative content whatsoever. So I can't see how the game theory literature could be said to give normatively bad advice, unless the speaker misunderstood the definition of rationality being used, and thought that some definition of rationality was being used in which rationality is normative.

I'm not sure what negative epistemic or instrumental outcomes you foresee either, but I'm open to the possibility that there are some.

Is there a term you prefer to "game-theory-rational" that captures the same meaning? As stated above, game theory is the exploration of what happens when entities that are "rational" by that specific definition interact with the world or each other, so it seems like the ideal term to me.

Comment by PhilosophyTutor on Real World Solutions to Prisoners' Dilemmas · 2012-07-09T06:41:28.206Z · LW · GW

Said literature gives advice, reasoning and conclusions that is epistemically, instrumentally and normatively bad.

Said literature makes statements about what is game-theory-rational. Those statements are only epistemically, instrumentally or normatively bad if you take them to be statements about what is LW-rational or "rational" in the layperson's sense.

Ideally we'd use different terms for game-theory-rational and LW-rational, but in the meantime we just need to keep the distinction clear in our heads so that we don't accidentally equivocate between the two.

Comment by PhilosophyTutor on Why Academic Papers Are A Terrible Discussion Forum · 2012-06-28T00:26:14.565Z · LW · GW

The effort required may be much larger than you think. Eliezer finds it very difficult to do that kind of work, for example. (Which is why his papers still read like long blog posts, and include very few citations. CEV even contains zero citations, despite re-treading ground that has been discussed by philosophers for centuries, as "The Singularity and Machine Ethics" shows.)

If this is the case, then a significant benefit to Eliezer of trying to get papers published would be that it would be excellent discipline for Eliezer, and would make him an even better scholar.

A follow-on benefit is that it would establish by example that nobody is above showing their work, acknowledging their debts and being current on the relevant literature. Conceivably Eliezer is such a talented guy that it is of no benefit to him to do these things, but if everyone who thought they were that talented were excused from showing their work and keeping current then progress would slow significantly.

It also avoids reinventing the wheel. No matter how smart Eliezer is, it's always conceivable that someone else thought of something first and expressed it in rigorous detail with proper citations. A proper literature review avoids this waste of valuable research time.

Comment by PhilosophyTutor on Med Patient Social Networks Are Better Scientific Institutions · 2012-05-14T01:35:58.439Z · LW · GW

I think you're probably right in general, but I wouldn't discount the possibility that, for example, a rumour could get around the ALS community that lithium was bad, and be believed by enough people for the lack of blinding to have an effect. There was plenty of paranoia in the gay community about AZT, for example, despite the fact that they had a real and life-threatening disease, so it just doesn't always follow that people with real and life-threatening diseases are universally reliable as personal judges of effective interventions.

Similarly if the wi-fi "allergy" crowd claimed that anti-allergy meds from a big, evil pharmaceutical company did not help them that could be a finding that would hold up to blinding but then again it might not.

I do worry that some naive Bayesians take personal anecdotes to be evidence far too quickly, without properly thinking through the odds that they would hear such anecdotes in worlds where the anecdotes were false. People are such terrible judges of medical effectiveness that in many cases I don't think the odds get far off 50% either way.

Comment by PhilosophyTutor on Med Patient Social Networks Are Better Scientific Institutions · 2012-05-13T21:02:39.735Z · LW · GW

What is your evidence for the claim that the main thing powering the superior statistical strength of PatientsLikeMe is the fact that medical researchers have learned to game the system and use complicated ad-hoc frequentist statistics to get whatever answer they want or think they ought to get? What observations have you made that are more likely given that hypothesis than under the alternatives?

Comment by PhilosophyTutor on Med Patient Social Networks Are Better Scientific Institutions · 2012-05-13T20:45:15.140Z · LW · GW

No. Lack of double-blinding will increase the false negative rate too, if the patients, doctors or examiners think that something shouldn't work or should be actively harmful. If you test a bunch of people who believe that aspartame gives them headaches or that wifi gives them nausea without blinding them you'll get garbage out as surely as if you test homeopathic remedies unblinded on a bunch of people who think homeopathic remedies cure all ills.

In this particular case I think it's likely the system worked because it's relatively hard to kid yourself about progressing ALS symptoms, and even with a hole in the blinding sometimes more data is just better. This is about as easy as medical problems get.

Generalising from this to the management of chronic problems seems like a major mistake. There's far, far more scope to fool oneself with placebo effects, wishful thinking, failure to compensate for regression to the mean, attachment to a hypothesis and other cognitive errors with a chronic problem.

Comment by PhilosophyTutor on Torture vs. Dust Specks · 2012-05-11T06:25:56.057Z · LW · GW

If this is a problem for Rawls, then Bentham has exactly the same problem given that you can hypothesise the existence of a gizmo that creates 3^^^3 units of positive utility which is hidden in a different part of the multiverse. Or for that matter a gizmo which will inflict 3^^^3 dust specks on the eyes of the multiverse if we don't find it and stop it. Tell me that you think that's an unlikely hypothesis and I'll just raise the relevant utility or disutility to the power of 3^^^3 again as often as it takes to overcome the degree of improbability you place on the hypothesis.

However I think it takes a mischievous reading of Rawls to make this a problem. Given that the risk of the trans-multiverse travel project being hopeless (as you stipulate) is substantial and these hypothetical choosers are meant to be risk-averse, not altruistic, I think you could consistently argue that the genuinely risk-averse choice is not to pursue the project since they don't know this worse-off person exists nor that they could do anything about it if that person did exist.

That said, diachronic (cross-time) moral obligations are a very deep philosophical problem. Given that the number of potential future people is unboundedly large, and those people are at least potentially very badly off, if you try to use moral philosophies developed to handle current-time problems and apply them to far-future diachronic problems it's very hard to avoid the conclusion that we should dedicate 100% of the world's surplus resources and all our free time to doing all sorts of strange and potentially contradictory things to benefit far-future people or protect them from possible harms.

This isn't a problem that Bentham's hedonistic utilitarianism, nor Eliezer's gloss on it, handles any more satisfactorily than any other theory as far as I can tell.

Comment by PhilosophyTutor on Rationality Quotes May 2012 · 2012-05-11T06:08:30.505Z · LW · GW

It already is in Bayesian language, really, but to make it more explicit you could rephrase it as "Unless P(B|A) is 1, there's always some possibility that hypothesis A is true but you don't get to see observation B."
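
A tiny numeric illustration of this, with made-up probabilities:

```python
# Made-up numbers: a probable hypothesis A that usually, but not always,
# produces observation B.
p_A = 0.9           # P(A)
p_B_given_A = 0.95  # P(B|A) < 1

# Probability that A is true and yet B is not observed on this occasion.
p_A_and_not_B = p_A * (1 - p_B_given_A)
print(p_A_and_not_B)  # ~0.045 -- small, but not zero
```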

Comment by PhilosophyTutor on Torture vs. Dust Specks · 2012-05-10T23:49:58.956Z · LW · GW

This "moral dilemma" only has force if you accept strict Bentham-style utilitarianism, which treats all benefits and harms as vectors on a one-dimensional line, and cares about nothing except the net total of benefits and harms. That was the state of the art of moral philosophy in the year 1800, but it's 2012 now.

There are published moral philosophies which handle the speck/torture scenario without undue problems. For example if you accepted Rawls-style, risk-averse choice from a position where you are unaware whether you will be one of the speck-victims or the torture victim, you would immediately choose the specks. Choosing the specks maximises the welfare of the least well off (they are subject to a speck, not torture) and, if you don't know which role you will play, eliminates the risk you might be the torture victim.

(Bentham-style utility calculations are completely risk-neutral and care only about expected return on investment. However nothing about the universe I'm aware of requires you to be this way, as opposed to being risk-averse).

Or for that matter if you held a modified version of utilitarianism that subscribed to some notion of "justice" or "what people deserve", and cared about how utility was distributed between persons instead of being solely concerned with the strict mathematical sum of all utility and disutility, you could just say that you don't care how many dust specks you pile up: the degree of unfairness in a distribution where 3^^^3 people are spared a dust speck while one person gets tortured makes the torture scenario a less preferable distribution.
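
For concreteness, here is a rough sketch of how the two decision rules discussed in this comment rank the same pair of outcomes; the disutility numbers are arbitrary placeholders and N stands in for 3^^^3.

```python
# Arbitrary placeholder disutilities; only the orderings matter.
SPECK_COST = 1           # disutility of one dust speck
TORTURE_COST = 10 ** 9   # stand-in disutility for 50 years of torture
N = 3                    # stand-in for 3^^^3, which will not fit in memory

specks = [-SPECK_COST] * N                  # everyone gets a speck
torture = [-TORTURE_COST] + [0] * (N - 1)   # one person is tortured, the rest are spared

def bentham(outcome):  # classical utilitarianism: maximise the sum
    return sum(outcome)

def rawls(outcome):    # maximin: maximise the welfare of the worst-off person
    return min(outcome)

# Bentham's rule flips to preferring torture once N * SPECK_COST exceeds
# TORTURE_COST; the maximin rule prefers specks no matter how large N grows.
print(max([specks, torture], key=bentham) is specks)  # True only while N is small
print(max([specks, torture], key=rawls) is specks)    # True for every N
```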

I know Eliezer's on record as advising people not to read philosophy, but I think this is a case where that advice is misguided.

Comment by PhilosophyTutor on Rationality Quotes May 2012 · 2012-05-10T03:37:32.565Z · LW · GW

If you have a result with a p value of p<0.05, then even when there is no real effect the universe could be kidding you with a result that strong up to 5% of the time. You can reduce the probability that the universe is kidding you with bigger samples, but you never get it to 0%.
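
A quick simulation of that point, using an invented no-effect setup and a crude test statistic:

```python
# Invented setup: two groups drawn from the SAME distribution, so any
# "significant" difference is the universe kidding us.
import random
from statistics import mean, stdev

def fake_experiment(n=50):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5
    # |z| > 1.96 roughly corresponds to p < 0.05 (two-sided)
    return abs(mean(a) - mean(b)) / se > 1.96

random.seed(0)
trials = 2000
false_positives = sum(fake_experiment() for _ in range(trials))
print(false_positives / trials)  # roughly 0.05 even though no real effect exists
```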

Comment by PhilosophyTutor on HPMOR: What could've been done better? · 2012-04-18T00:42:59.200Z · LW · GW

We probably shouldn't leap to the assumption that Transfiguration Weekly is a peer-reviewed journal with a large staff publishing results from multiple large laboratories. For all we know it's churned out in a basement by an amateur enthusiast, is only eight pages long on a good week and mostly consists of photographs of people's cats transfigured into household objects.

Comment by PhilosophyTutor on Beautiful Probability · 2012-03-16T04:46:40.263Z · LW · GW

In the real world, Eliezer's example simply doesn't work.

In the real world you only hear about the results when they are published. The prior probability of the biased researcher publishing a positive result is higher than the prior probability of the unbiased researcher publishing a positive result.

The example only works if you are an omniscient spy who spies on absolutely all treatments. It's true that an omniscient spy should just collate all the data regardless of the motivations of the researcher spied upon. However unless you are an omniscient spy you do need to take into account how an agent showing you new data went about gathering the data that they are showing you. Otherwise you are vulnerable to being "mushroomed" and updating yourself into beliefs that will cause you to lose.

Comment by PhilosophyTutor on Extreme Rationality: It's Not That Great · 2012-01-24T09:56:50.831Z · LW · GW

That seems to be a poorly-chosen prior.

An obvious improvement would be to instead use "non-rationalists are dedicated to achieving a goal through training and practice, and find a system for doing so which is significantly superior to alternative, existing systems".

It is no great praise of an exercise regime, for example, to say that those who follow it get fitter. The interesting question is whether that particular regime is better or worse than alternative exercise regimes.

However the problem with that question is that there are multiple competing strands of seduction theory, which is why any critic can be accused of attacking a straw man regardless of the points they make. So you need to specify multiple sub-questions of the form "Group A of non-rationalists were dedicated to achieving a goal through training and practice, and found a system for doing so which is significantly superior to alternative, existing systems", "Group B of non-rationalists..." and so on for as many sub-types of seduction doctrine as you are prepared to acknowledge, where the truth of some groups' doctrines precludes the truth of some other groups' doctrines. As musical rationalists Dire Straits pointed out, if two guys say they're Jesus then at least one of them must be wrong.

So then ideally we ask all of these people what evidence led them to fix the belief they hold that the methods of their group perform better than alternative, existing ways of improving your attractiveness. That way we could figure out which if any of them are right, or whether they are all wrong.

However I don't seem to be able to get to that point. Since you position yourself as outside the seduction community and hence immune to requests for evidence, but as thoroughly informed about the seduction community and hence entitled to pass judgment on whether my comments are directed at straw men, there's no way to explore the interesting question by engaging with you.

Edit to add: I see one of the ancestor posts has been pushed down to -3, the point at which general traffic will no longer see later posts. Based on previous experience I predict that N accounts who downvote or upvote all available posts along partisan lines will hit this subthread pushing all of wedrifid's posts up by +N and all of my posts down by -N.

Comment by PhilosophyTutor on Extreme Rationality: It's Not That Great · 2012-01-24T09:02:29.419Z · LW · GW

On lesswrong insisting a claim is unfalsifiable while simultaneously explaining how that claim can be falsified is more than sufficient cause to downvote.

That's rather sad, if the community here thinks that the word "unfalsifiable" only refers to beliefs which are unfalsifiable in principle from the perspective of a competent rationalist, and that the word is not also used to refer to belief systems held by irrational people which are unfalsifiable from the insider/irrational perspective.

The fundamental epistemological sin is the same in each case, since both categories of belief are irrational in the sense that there is no good reason to favour the particular beliefs held over the unbounded number of other, equally unfalsifiable beliefs which explain the data equally well.

That said, I do find it curious that such misunderstandings seem to exclusively crop up in those posts where I criticise the beliefs of the seduction community. Those posts get massively downvoted compared to posts I make on any other topic, and from my insider perspective there is no difference in quality of posting.

consistent use of straw men and the insulting misrepresentation of a group of people you are opposed to.

There's a philosophical joke that goes like this:

"Zabludowski has insinuated that my thesis that p is false, on the basis of alleged counterexamples. But these so- called "counterexamples" depend on construing my thesis that p in a way that it was obviously not intended -- for I intended my thesis to have no counterexamples. Therefore p".

Source

It's not clear to me at all that I have used straw men or misrepresented a group, and from my perspective it seems that it's impossible to criticise any aspect of the seduction community or its beliefs without being accused of attacking a straw man.

They should not be persuasive and are not intended as such. Instead, in this case, it was an explicit rejection of the "My side is the default position and the burden of proof is on the other!" debating tactic. The subject of how to think correctly (vs debate effectively) is one of greater interest to me than seduction.

Perhaps we should drop this subtopic then, since it seems solely to be about your views of what you see as a particular debating tactic, and get back to the issue of what exactly the evidence is for the beliefs of the seduction community.

If we can agree that how to think correctly is the more interesting topic, then possibly we can agree to explore whether or not the seduction community are thinking correctly by means of examining their evidence.

Comment by PhilosophyTutor on Extreme Rationality: It's Not That Great · 2012-01-24T08:12:49.312Z · LW · GW

It is dramatically different thing to say "people who are in the seduction community are the kind of people who would make up excuses if their claims were falsified" than to say "the beliefs of those in the seduction community are unfalsifiable". While I may disagree mildly with the former claim the latter I object to as an absurd straw man.

I'm content to use the term "unfalsifiable" to refer to the beliefs of homeopaths, for example, even though by conventional scientific standards their beliefs are both falsifiable and falsified. Homeopaths have a belief system in which their practices cannot be shown to not work, hence their beliefs are unfalsifiable in the sense that no evidence you can find will ever make them let go of their belief. The seduction community have a well-developed set of excuses for why their recollections count as evidence for their beliefs (even though they probably shouldn't count as evidence for their beliefs), and for why nothing counts as evidence against their beliefs.

I reject the skeptic role of thrusting the burden of proof around, implying "You've got to prove it to me or it ain't so!' That's just the opposite stupidity to that of a true believer. It is a higher status role within intellectual communities but it is by no means rational.

It is not the opposite of stupidity at all to see a person professing belief Y, and say to them "Please tell me the facts which led you to fix your belief in Y". If their belief is rational then they will be able to tell you those facts, and barring significantly differing priors you too will then believe in Y.

I suspect we differ in our priors when it comes to the proposition that the rituals of the seduction community perform better than comparable efforts to improve one's attractiveness and social skills that are not informed by seduction community doctrine, but not so much that I would withhold agreement if some proper evidence was forthcoming.

However if the local seduction community members instead respond with defensive accusations, downvotes and so forth but never get around to stating the facts which led them to fix their belief in Y then observers should update their own beliefs to increase the probability that the beliefs of the seduction community do not have rational bases.

Unless they are teachers, people are not responsible for forcing correct epistemic states upon others. They are responsible for their beliefs, you are responsible for yours.

Can you see that from my perspective, responses which consist of excuses as to why supporters of the seduction community doctrine(s) should not be expected to state the facts which inform their beliefs are not persuasive? If they have a rational basis for their belief they can just state it. I struggle to envisage probable scenarios where they have such rational bases but rather than simply state them they instead offer various excuses as to why, if they had such evidence, they should not be expected to share it.

Comment by PhilosophyTutor on Extreme Rationality: It's Not That Great · 2012-01-24T07:06:54.630Z · LW · GW

Are you familiar with the technical meaning of 'unfalsifiable'? It does not mean 'have not done scientific tests'. It means 'cannot do scientific tests even in principle'. I would like it if scientists did do more study of this subject but that is not relevant to whether claims are falsifiable.

In the case of Sagan's Dragon, the dragon is unfalsifiable because there is always a way for the believer to explain away every possible experimental result.

My view is that the mythology of the seduction community functions similarly. You can't attack their theories because they can respond by saying that the theory is merely a trick to elicit specific behaviour. You can't attack their claims that specific behaviours are effective because they will say that there is proof, but it only exists in their personal recollections so you have to take their word for it. You can't attack their attitudes, assumptions or claims because they can respond by pointing at one guru or another and saying that particular guru does not share the attitude, assumption or claim you are critiquing.

Their claim could theoretically be falsified, for example by a controlled test with a large sample size which showed that persons who had spent N hours studying and practicing seduction community doctrine/rituals (for some value of N which the seduction community members were prepared to agree was sufficient to show an effect) were no more likely to obtain sex than persons who had spent N hours on things like grooming, socialising with women without using seduction community rituals, reading interesting books they could talk about, taking dancing lessons and whatnot. I suspect but cannot prove though that if we conducted such a test those people who have made the seduction community a large part of their life would find some way to explain the result away, just as the believer in Sagan's dragon comes up with ways to explain away results that would falsify their dragon.

Of course it's not the skeptic's job to falsify the claims of the seduction community. Members of that community very clearly have a large number of beliefs about how best to obtain sex, even if those beliefs are not totally homogenous within that community, and it's their job to present the evidence that led them to the belief that their methods are effective. If it turns out that they have not controlled for the relevant cognitive biases including but not limited to the experimenter effect, the placebo effect, the sunk costs fallacy, the halo effect and correlation not proving causation then it's not rational to attach any real weight to their unsupported recollection as evidence.

Comment by PhilosophyTutor on Extreme Rationality: It's Not That Great · 2012-01-24T05:15:13.230Z · LW · GW

This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect to on average achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them are even actually falsified. They call it "Field Testing".

If they conducted tests of X versus Y with large sample sizes and with blinded observers scoring the tests then they might have a basis to say "I know that if I do X I can expect to on average achieve a better outcome with women than if I do Y". They don't do such tests though.

They especially don't do such tests where X is browsing seduction community sites and trying the techniques they recommend and Y is putting an equal amount of time and effort into personal grooming and socialising with women without using seduction community techniques.

Scientific methodology isn't just a good idea, it's the law. If you don't set up your tests correctly you have weak or meaningless evidence.

> Your depiction of the seduction community is a ridiculous straw man and could legitimately be labelled offensive by members of the community that you are so set on disparaging. Mind you they probably wouldn't bother doing so: The usual recommended way to handle such shaming attempts is to completely ignore them and proceed to go get laid anyway.

Or as the Bible says, "But if any place refuses to welcome you or listen to you, shake its dust from your feet as you leave to show that you have abandoned those people to their fate". It's good advice for door-to-door salespersons, Jehovah's Witnesses and similar people in the business of selling. If you run into a tough customer, don't waste your time trying to convince them; just walk away and look for an easier mark.

However in science that's not how you do things. In science if someone disputes your claim you show them the evidence that led you to fix your claim in the first place.

Are you sure you meant to describe my post as a "shaming attempt"? As pejoratives go this seems like an ill-chosen one, since my critique was strictly epistemological. It seems at least possible that you are posting a standard talking point which is deployed by seduction community members to dismiss ethical critiques, but which makes no sense in response to an epistemological critique.

(There are certainly concerns to be raised about the ethics of the seduction community, but that would be a different post).

Comment by PhilosophyTutor on Extreme Rationality: It's Not That Great · 2012-01-24T04:04:23.981Z · LW · GW

I would say that it is largely the ostensible basis of the seduction community.

As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable. If their theories are unsupported it doesn't matter, because they can disclaim the theories as just being a psychological trick to get you to take "correct" actions. However, they've got no rigorous evidence that their "correct" actions actually lead to any more mating success than spending an equivalent amount of time on personal grooming and talking to women without using any seduction-community rituals. They also have such a wide variety of conflicting doctrines and gurus that they can dismiss almost any critique as being based on ignorance, because they can always point to something written somewhere that will contradict any attempt to characterise the seduction community - not that this ever stops them from making claims about the community themselves.

They'll claim that they develop such evidence by going out and picking up women, but since they don't do any controlled tests this cannot, even in principle, produce evidence that the techniques they advocate change their success rate. Even if they did conduct controlled studies, their sample sizes would be tiny given the claimed success rates. I believe one "guru" claims to obtain sex in one out of thirty-three approaches. I do not believe that anyone's intuitive grasp of statistics is so refined that they can spot variations in such an infrequent outcome and determine whether a given technique increases or decreases that success rate. To do science on such a phenomenon would take a very big sample size. Ergo anyone claiming to have scientific evidence without having done a study with a very big sample size is a fool or a knave.
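
To put a rough number on "very big", a standard two-proportion power calculation illustrates the scale involved. This is a minimal sketch assuming a hypothetical base rate of 1 success in 33 approaches and a hypothetical 50% relative improvement; both figures are placeholders, not claims about any real technique:

```python
# Rough sample-size sketch for detecting a change in a ~1-in-33 success rate.
# All rates are illustrative assumptions, not measurements.
from math import ceil
from scipy.stats import norm

p1 = 1 / 33          # assumed baseline success rate per approach
p2 = 1.5 * p1        # assumed improved rate (a 50% relative increase)
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)   # two-sided test
z_b = norm.ppf(power)

# Classical formula for comparing two independent proportions
n_per_group = ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2
print(f"approaches needed per group: {ceil(n_per_group)}")
```

Under those assumptions the answer comes out on the order of a couple of thousand approaches per group, which is exactly the sort of scale no individual's field-testing recollections can cover.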

The mythology of the seduction community is highly splintered and constantly changes over time, which increases the subjective likelihood that we are looking at folklore and scams rather than any kind of semi-scientific process homing in on the truth.

It's also easy to see how it could be very appealing to lonely nerds to think that they could download a walkthrough for getting women into bed the way they can download a walkthrough for Mass Effect or Skyrim. It's an empowering fantasy, to be sure.

If that's what it takes to get them to groom themselves and go talk to women, it might even work in an indirect, placebo-like way. So if you prioritise getting laid over knowing the scientific truth about the universe, it might be rational to be selectively irrational about seduction folklore. However, if you want to know the truth about the universe there's not much to be gained from the seduction community. If they ever appear to be doing better than chance, it's for the same reason a stopped clock is right twice a day.

My own view is that the entire project is utterly misguided. Instead of hunting for probably-imaginary increases in their per-random-stranger success at getting sex they should focus on effectively searching the space of potential mates for those who are compatible with them and would be interested in them.

Comment by PhilosophyTutor on Undiscriminating Skepticism · 2012-01-23T01:03:08.002Z · LW · GW

That is indeed a valid argument-form, in basic classical logic. To illustrate this we can just change the labels to ones less likely to cause confusion:

  1. Person X is a Foffler with respect to Y.
  2. Things said about Y by persons who are Fofflers with respect to Y are Snarfly.
  3. Person X said Z about Y.
  4. Z is Snarfly.

The problem arises when, instead of sticking a label on the set like "Snarfly" or "bulbous" or whatever, you use a label such as "likely to be correct", and people start trying to pull meaning out of that label and apply it to the argument they've just heard. Classical logic, barring specific elaborations, just doesn't let you do that. Classical logic wants you to treat each label as a meaningless and potentially interchangeable token.

In classical logic, if you make up a set called "statements which are likely to be correct" then a statement is either a member of that set or it isn't (barring paradoxical scenarios). If it's a member of that set then it is always and forever a member of that set no matter what happens, and if it's not a member then it is always and forever not a member. This is totally counterintuitive, because that label makes you want to think that objects should be able to pop in and out of the set as the evidence changes. This is why you have to be incredibly careful in parsing classical-logic arguments that use such labels; it's very easy to get confused about what is actually being claimed.

What's actually being claimed by that argument in classical logical terms is "Z is 'likely to be correct', and Z always will be 'likely to be correct', and this is an eternal feature of the universe". The argument for that conclusion is indeed valid, but once the conclusion is properly explicated it immediately becomes patently obvious that the second premise isn't true and hence the argument is unsound.

Where the parent is simply mistaken in my view is in presenting the above as an instance of the argument from authority. It's not, simply because the argument from authority as it's usually construed contains the second premise only in implicit form and reaches a more definite conclusion. The argument from authority in the sense that it's usually referred to just goes:

  1. Person X has reputation for being an expert on Y.
  2. Person X said Z about Y.
  3. Z is true.

That is indeed an invalid argument.

You can turn it in to a valid argument by adding something like:

2a. Everything Person X says about Y is true.

...but then it wouldn't be the canonical appeal to authority any more.
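
To make the contrast concrete, here is a minimal sketch (entirely my own illustration, with placeholder predicate names) that brute-forces every tiny model of the situation and checks whether the conclusion can fail while the premises hold. The bare appeal to authority admits a counterexample; adding premise 2a rules it out:

```python
# Brute-force validity check over tiny models of the two argument forms above.
from itertools import product

def models():
    """Every assignment of truth values to the three atomic facts involved."""
    for expert, said, z_true in product([False, True], repeat=3):
        yield {"expert(X,Y)": expert, "said(X,Z,Y)": said, "true(Z)": z_true}

def bare_appeal(m):
    # Premise 1: X has a reputation for expertise about Y.
    # Premise 2: X said Z about Y.
    return m["expert(X,Y)"] and m["said(X,Z,Y)"]

def with_premise_2a(m):
    # Premise 2a: everything X says about Y is true
    # (here: if X said Z about Y, then Z is true).
    return bare_appeal(m) and (not m["said(X,Z,Y)"] or m["true(Z)"])

def valid(premises):
    """Valid iff no model satisfies the premises while falsifying 'Z is true'."""
    return all(m["true(Z)"] for m in models() if premises(m))

print("bare appeal to authority valid?", valid(bare_appeal))      # False
print("with premise 2a added, valid? ", valid(with_premise_2a))   # True
```

The counterexample the search finds is the obvious one: a model in which the reputed expert said Z but Z happens to be false.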

Comment by PhilosophyTutor on Undiscriminating Skepticism · 2012-01-23T00:12:58.839Z · LW · GW

Here's a link:

http://rationalwiki.org/wiki/Astrology

In brief, there is no evidence from properly conducted trials that astrology can predict future events at a rate better than chance. In addition, physics as we currently understand it rules out any effect from such distant objects remotely strong enough to influence human affairs.

Astrology can appear to work through a variety of cognitive biases, or can be made to appear to work through various forms of trickery. For example, when someone is majorly freaked out by the accuracy of a guess (and with a large enough population reading a guess it's bound to be accurate for some of them), that is much more memorable and much more likely to be shared with others than the times when the prediction is obviously wrong. As such, the availability heuristic might make you think that such instances are far more common than they actually are, while the actual frequency is entirely explicable by chance alone.
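
The "bound to be accurate for some of them" point is just arithmetic. Here is a minimal sketch with made-up numbers; the per-reader hit probability and the audience size are assumptions chosen only to illustrate the shape of the effect:

```python
# With a large enough audience, strikingly "accurate" readings are expected
# by chance alone. Both numbers below are invented for illustration.
p_hit = 1e-3        # assumed chance a given reader finds a reading uncannily accurate
readers = 100_000   # assumed audience for one published horoscope

expected_hits = readers * p_hit
p_at_least_one = 1 - (1 - p_hit) ** readers

print(f"expected uncanny hits: {expected_hits:.0f}")   # ~100
print(f"P(at least one hit): {p_at_least_one:.6f}")    # effectively 1.0
```

And it is only those hundred-odd readers, not the many thousands who got nothing, whose stories get retold, which is exactly the availability effect described above.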

Comment by PhilosophyTutor on Welcome to Less Wrong! (2012) · 2012-01-03T01:56:07.577Z · LW · GW

> It's sociopaths who should all be killed or otherwise removed from society.

Lots of sociopaths, as the term is clinically defined, live perfectly productive lives, often in high-stimulation, high-risk jobs that neurotypical people don't want to do, like small-aircraft piloting, serving in the special forces of their local military, and so on. They don't learn well from bad experiences and they need a lot of stimulation to get a high, so those sorts of roles are ideal for them.

They don't need to be killed or removed from society, they need to be channelled into jobs where they can have fun and where their psychological resilience is an asset.

Comment by PhilosophyTutor on Tell Your Rationalist Origin Story · 2012-01-02T04:24:49.185Z · LW · GW

> If there was a real guy called Jesus of Nazareth around the early 1st century, who was crucified during Pontius Pilate, and his disciples and followers that formed the core of the religious movement later called Christianity, to argue that Jesus was nonetheless "completely fictional" becomes a mere twisting of words that miscommunicates its intent.

Isn't that just what I said? I contrasted such a Jesus-figure with one who did not do those things, and said that the Jesus-figure you describe would count as a historical Jesus, while one who did not do those things would not.

> At this point no matter how much evidence appear for a historical Jesus, you can argue that he's fictional because he doesn't match well enough the story of the Bible.

When I start doing that, you can legitimately criticise me for it. Until then you are blaming me for something I haven't done yet.

> I'm getting tired, and this is becoming ludicrous. You're not telling me why these things were important, if they weren't real.

There could be many reasons, but the most obvious possibility is that Paul (or whoever) made up a story with those elements, and those who came afterwards had to work within that framework to maintain suspension of disbelief. If you've been proclaiming on street corners for years that you are followers of "Jesus of Nazareth" it could well be hard to suddenly rebrand yourself as followers of "Jesus of Bethlehem" when you figured out you'd have broader appeal if you claimed your messiah was the foretold Jewish messiah. They might wish with hindsight that they'd said he'd been born somewhere else to different parents with a different name, but you can't change your whole brand identity overnight. That doesn't mean the story is true, it just means that the person who made it up didn't perfectly foresee the later opportunities to piggyback on other myths.

If you think about it, the argument that they must have had to keep those elements because they were real doesn't actually make any sense. From the late first century onwards neither the people making up the Christian mythology nor their audience would have had any means to check whether those elements were factual or not. There would have been constraints on their ability to change their story, but historicity would not have been one of those constraints.

> I'm not a Christian, I'm an atheist. That doesn't mean I have to ignore what the evidence tells me.

I'm still not clear why you assume the zero point of the graph is a real story, as opposed to a made-up story. The fact that they changed it later isn't evidence it's real, just evidence that you can't turn a cult on a dime.