Posts

Democracy and individual liberty; decentralised prediction markets 2014-03-15T12:27:37.050Z

Comments

Comment by Chrysophylax on Strong Evidence is Common · 2021-05-02T14:40:26.891Z · LW · GW

You seem to have made two logical errors here. First, "This belief is extreme" does not imply "This belief is true", but neither does it imply "This belief is false". You shouldn't divide beliefs into "extreme" and "non-extreme" buckets and treat them differently. 

Second, you seem to be using "extreme" to mean both "involving very high confidence" and "seen as radical", by the latter of which you might mean "in favour of a proposition I assign a very low prior probability".

Restating my first objection, "This belief has prior odds of 1:1024" is exactly 10 bits of evidence against the belief. You can't use that information to update the probability downward, because -10 bits is "extreme", any more than you can update the probability upward because -10 bits is "extreme". If you could do that, you would have a prior that immediately requires updating based on its own content (so it's not your real prior), and I'm pretty sure you would either get stuck in infinite loops of lowering and raising the probability of some particular belief (based on whether it is "extreme" or not), or else be able to pump out infinite evidence for or against some belief.
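To make the arithmetic concrete, here is a minimal sketch in Python (the 20-bit likelihood ratio is a made-up example): the -10 bits are already counted in the prior, and only a likelihood ratio from new evidence can move the posterior.

    import math

    prior_odds = 1 / 1024                   # odds for : against
    prior_bits = math.log2(prior_odds)      # -10.0: already counted in the prior
    likelihood_ratio = 2 ** 20              # hypothetical: 20 bits of new evidence
    posterior_odds = prior_odds * likelihood_ratio
    print(prior_bits, posterior_odds)       # -10.0 1024.0: now 1024:1 in favour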

Comment by Chrysophylax on Open thread, 11-17 March 2014 · 2014-03-27T01:56:45.065Z · LW · GW

Look at what warfare was like in China or Japan before major Western influences (not that it was much better after Western influences).

Vastly inferior to, say, warfare as practiced by 14th-century England, I'm sure. I also point you towards the Rape of Nanking.

Compare that with any group besides "The West". They would do much worse things and not even bother angsting about it.

You are comparing modern westerners with historical Buddhists. Try considering contemporary Buddhists (the group it is blindingly obvious I was referring to, given that the discussion was about the present and whether contemporary non-western groups all lack moral qualms about torture).

I observe that you are being defensive.

Comment by Chrysophylax on Conversation Halters · 2014-03-27T00:48:26.808Z · LW · GW

Read the linked post. The main reasons we can't define words however we like are because that leads to not cutting reality at the joints and because humans are bad at avoiding hidden inferences. Not being a biologist, I can't assign an ideal definition to "duck", but I do know that calling lobsters ducks is clearly unhelpful to reasoning. For a more realistic example, note the way Reactionaries (Michael Anissimov, Mencius Moldbug and such) use "demotist" to associate things that are clearly not similar.

Comment by Chrysophylax on Stranger Than History · 2014-03-27T00:44:30.885Z · LW · GW

Firstly, that explanation has a very low probability of being true. Even if we assume that important systematic differences in IQ existed for the relevant period, we are making a very strong claim when we say that slavery is a direct result of lower IQ. As you yourself point out, Arabs also historically enslaved Europeans; one might also observe that the Vikings did an awful lot of enslaving. Should we therefore conclude that the Nordic peoples are more intelligent than the Slavs and Anglo-Saxons?

Secondly, your objection now reduces to "other people in history were prejudiced against blacks, so modern prejudice is probably not a consequence of slavery". Obviously it reduces the probability, but by a very small amount. Other people have also been angry with Bob; nevertheless, it remains extremely probable that I am angry because he just punched me.

Are you seriously trying to argue that the prejudice against blacks in Europe and the USA is not a consequence of the slave trade?

Comment by Chrysophylax on Stranger Than History · 2014-03-27T00:08:04.689Z · LW · GW

I've read it. Views about black people in the Islamic Golden Age were not the cause of views about black people in the nations participating in the transatlantic slave trade; a quick check of Wikipedia confirms that slavery as a formal institution had to redevelop in the English colonies, as chattel slavery had virtually disappeared after the Norman Conquest and villeinage was largely gone by the beginning of the 17th century. One might as well argue that the ethic of reciprocity in modern Europe owes its origin to Confucian ren.

Comment by Chrysophylax on Open thread, 11-17 March 2014 · 2014-03-24T19:06:31.679Z · LW · GW

If we define all deliberate infliction of pain as torture then we lose the use of a useful concept. You are not cutting reality at the joint.

Comment by Chrysophylax on Open thread, 11-17 March 2014 · 2014-03-24T19:01:51.367Z · LW · GW

But the big jump was in karma, not karma-for-the-month. My karma-for-the-month went down by two and my karma went down by 25. I'm now on {20, 5}, which is inconsistent with the {12, -2} and {16, 2} from earlier today.

Comment by Chrysophylax on Open thread, 11-17 March 2014 · 2014-03-24T15:43:11.056Z · LW · GW

Thank you. I'm still confused, though, because I started out at 0 karma for the month, making the changes in the numbers non-equal. I'm now on {16, 2}, which is consistent with {12,-2}, though.

Comment by Chrysophylax on Open thread, 11-17 March 2014 · 2014-03-24T10:10:14.987Z · LW · GW

Counterexample: most Buddhists.

Your enemies (and, you know, the rest of humanity) are not innately evil: there are very few people who will willingly torture people. There are quite a lot of people who will torture horrible mockeries of humanity / the Enemy, and an awful lot of people who will torture people because someone in authority told them to, but very few people who feel comfortable with torturing things they consider people. The Chinese government does some pretty vile things; I nevertheless doubt that every Party bureaucrat would be happy to be involved in them.

Comment by Chrysophylax on Stranger Than History · 2014-03-24T09:43:29.951Z · LW · GW

Are there any countries that allow gay marriage that don't have a longish history of Christianity?

No. There are 17 countries that allow it and 2 that allow it in some jurisdictions. A list may be found here: http://www.pewforum.org/2013/12/19/gay-marriage-around-the-world-2013/

There have been plenty of cultures where homosexuality was accepted; classical Greece and Rome, for example. Cultures where marriage is predominantly a governmental matter rather than a religious one are all, as far as I am aware, heavily influenced by the cultures of western Europe. One might also observe that all of these countries are industrial or post-industrial, and have large populations of young people with vastly more economic and sexual freedom than occurred before the middle of the 20th century. One might also observe that China, Japan and South Korea seem to be the only countries at this level of economic development that were not culturally dominated by colonial states.

The fact that a history of Christianity is positively correlated with approval for gay marriage does not imply that Christian memes directly influence stances on homosexuality. Christianity spread around the world alongside other memes (such as democracy and case law). Those countries where European colonies were culturally dominant also received the industrial revolution and the immense increases in personal rights that came as a consequence of the increased economic and political power of the working class. One might also point out that thinking black people are inferior is a meme that arose from the slave trade in Christian semi-democracies.

There seems to be abundant evidence that the Abrahamic religions have strongly influenced societal views worldwide with regard to sexual morals; indeed, I cannot imagine a remotely plausible argument for this being untrue. I also wish to observe that Eastern Orthodox Christianity survived the USSR and still affects cultural values in Russia; it seems highly improbable that it did not influence Russian culture in the 1930s.

Comment by Chrysophylax on Open thread, 11-17 March 2014 · 2014-03-24T08:49:49.872Z · LW · GW

Yet another karma query: yesterday my karma was 37. Today my karma is 12 and I am at -2 karma for the last 30 days. What's going on here?

Comment by Chrysophylax on Stranger Than History · 2014-03-24T00:08:14.071Z · LW · GW

I agree with the statements of fact but not with the inference drawn from them. While Jiro's argument is poorly expressed, I think it is reasonable to say that opposition to homosexuality would not have been the default stance of the cultures of or derived from Europe if not for Christianity being the dominant religion in previous years. While the Communists rejected religion, they did not fully update on this rejection, but rather continued in many of the beliefs that religion had caused to be part of their culture.

I am not sure that "the atheists actually thought gay marriage was a sane idea but didn't say so for fear of how they'd look to their religious neighbors" was Jiro's position, but I think that it is a straw man.

Comment by Chrysophylax on Stranger Than History · 2014-03-23T23:57:22.735Z · LW · GW

An argument is valid if its conclusion must be true whenever its premises are true. A valid argument in this context might therefore be "given that we wish to maximise social welfare (A) and that allowing gay marriage increases social welfare (B), we should allow gay marriage (C)". A and B really do imply C. Some people contend that the argument is not sound (that is, that at least one of its premises fails to reflect reality, so the argument does not establish its conclusion); I am not aware of anyone who contends that it is invalid.

Jiro is contending that people who oppose gay marriage do not do so because they have valid arguments for doing so; if we were to refute their arguments they would not change their minds. Xe has argued above that people (as a group) did not stop being anti-homosexuality for rational reasons, i.e. because the state of the evidence changed in important ways or because new valid arguments were brought to bear, but rather for irrational reasons, such as old people dying.

The fact that Jiro considers it rational to believe that gay marriage is a good thing, and thus that people's beliefs are now in better accord with an ideal reasoner's beliefs ("are more rational"), does not contradict Jiro's belief that popular opinion changed for reasons other than those that would affect a Bayesian. Eugine_Nier appears to be conflating two senses of "rational".

As RichardKennaway observes, we ought to ask why Jiro believes that we should allow gay marriage. I suspect the answer will be close to "because it increases social welfare", which seems to be a well-founded claim.

Comment by Chrysophylax on [deleted post] 2014-01-28T19:12:00.379Z

Upvoted.

To clarify: VNM-utility is a decision utility, while utilitarianism-utility is an experiential utility. The former describes how a rational agent behaves (a rational agent always maximises expected VNM-utility) and is unique only up to positive affine transformation: it doesn't matter what values we assign to different outcomes as long as the induced preference order over lotteries does not change. The latter describes what values should be ascribed to different experiences and is cardinal in a stronger sense, as changing the numbers matters even when no decision changes.
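A minimal sketch of the invariance claim, with made-up numbers: a positive affine transformation of a VNM-utility function never changes which lottery is chosen, whereas a merely order-preserving transformation can.

    u = {"A": 0.0, "B": 3.0, "C": 5.0}

    def expected_utility(util, lottery):
        return sum(p * util[outcome] for outcome, p in lottery)

    sure_thing = [("B", 1.0)]              # B for certain
    gamble = [("A", 0.5), ("C", 0.5)]      # 50/50 between A and C

    affine = {o: 2 * x + 3 for o, x in u.items()}   # positive affine transform
    squared = {o: x ** 2 for o, x in u.items()}     # order-preserving, not affine

    for name, util in [("u", u), ("2u+3", affine), ("u^2", squared)]:
        print(name, expected_utility(util, sure_thing), expected_utility(util, gamble))
    # u:    3.0 vs 2.5  -> prefers the sure thing
    # 2u+3: 9.0 vs 8.0  -> same choice under any positive affine transform
    # u^2:  9.0 vs 12.5 -> choice flips: the magnitudes carry information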

Comment by Chrysophylax on On Voting for Third Parties · 2014-01-28T18:39:04.135Z · LW · GW

Really? By whose definition of "bad laws"? There are an awful lot of laws that I don't like (for example, ones mandating death for homosexual sex) but that doesn't mean I'd like to screw up the governance of an entire country by not allowing any bills whatsoever to pass until a reform bill passed. That's a pretty good way to get a civil war. Look, for example, at Thailand, which is close to separating into two states because the parties are so opposed. Add two years of legislative gridlock and they'd hate each other even more; I am reasonably confident that gridlock in Thailand would lead to mass civil unrest and a potential secession of the northeast, which might well be violent.

Comment by Chrysophylax on Stupid Questions Thread - January 2014 · 2014-01-16T21:33:44.791Z · LW · GW

Yes, due to those being standard terms in economics. Overinvestment occurs when investment is poorly allocated due to overly-cheap credit and is a key concept of the Austrian school. Underconsumption is the key concept of Keynesian economics and the economic views of every non-idiot since Keynes; even Friedman openly declared that "we are all Keynesians now". Keynesian thought, which centres on the possibility of prolonged deficient demand (like what caused the recession), wasn't wrong, it was incomplete; the reason fine-tuning by demand management doesn't work simply wasn't known until we had the concept of the vertical long-run Phillips curve. Both of these ideas are currently being taught to first-year undergraduates.
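For concreteness, the expectations-augmented Phillips curve (the standard formulation) is

$$\pi = \pi^e - a(u - u_n)$$

where $\pi$ is inflation, $\pi^e$ expected inflation, $u$ unemployment and $u_n$ its natural rate. In the long run expectations catch up, so $\pi = \pi^e$ forces $u = u_n$ at any inflation rate: the long-run curve is vertical, and demand management cannot buy a permanent reduction in unemployment.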

Comment by Chrysophylax on Stupid Questions Thread - January 2014 · 2014-01-15T17:02:10.855Z · LW · GW

Robert Nozick:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.

My point is that humans mostly act as though they are utility monsters with respect to non-humans (and possibly humans they don't identify with); they act as though the utility of a non-sapient animal is vastly smaller than the utility of a human and so making the humans happy is always the best option. Some people put a much higher value on animal welfare than others, but there are few environmentalists willing to say that there is some number of hamsters (or whatever you assign minimal moral value to) worth killing a child to protect.

Comment by Chrysophylax on Serious Stories · 2014-01-14T21:45:17.865Z · LW · GW

Because a child who doesn't find pain unpleasant is really, really handicapped, even in the modern world. The people who founded A Gift of Pain had a daughter with pain asymbolia who is now mostly blind, amongst other disabilities, through self-inflicted damage. I'm not sure whether leprosy sufferers have the no-pain or no-suffering version of pain insensitivity (I think the former) but apparently it's the reason they suffer such damage.

This book seems to be a useful source for people considering the question of whether pain could be improved.

Comment by Chrysophylax on What rationality material should I teach in my game theory course · 2014-01-14T12:06:22.581Z · LW · GW

Newcomb-style problems, including the Prisoner's Dilemma, and the difference between rationality-as-winning and rationality-as-rituals-of-cognition.

Comment by Chrysophylax on Stupid Questions Thread - January 2014 · 2014-01-14T12:03:18.046Z · LW · GW

Eliezer once tried to auction a day of his time on eBay, but I can't find the listing by Googling.

On an unrelated note, the top Google result for "eliezer yudkowsky " (note the space) is "eliezer yudkowsky okcupid". "eliezer yudkowsky harry potter" is ninth, while HPMOR, LessWrong, CFAR and MIRI don't make the top ten.

Comment by Chrysophylax on Stupid Questions Thread - January 2014 · 2014-01-14T11:56:53.318Z · LW · GW

Nail polish base coat over the cuticle might work. Personally I just try not to pick at them. I imagine you can buy base coat at the nearest pharmacy, but asking a beautician for advice is probably a good idea; presumably there is some way that people who paint their nails prevent hangnails from spoiling the effect.

Comment by Chrysophylax on Stupid Questions Thread - January 2014 · 2014-01-14T11:28:13.783Z · LW · GW

There is such a thing as overinvestment. There is also such a thing as underconsumption, which is what we have right now.

Comment by Chrysophylax on On Voting for Third Parties · 2014-01-13T21:00:37.092Z · LW · GW

I agree that voting for a third party which better represents your ideals can make the closer main party move in that direction. The problem is that this strategy makes the main party more dependent upon its other supporters, which can lead to identity politics and legislative gridlock. If there were no Libertarian party, for example, libertarian candidates would have stood as Republicans, thereby shifting internal debate towards libertarianism.

Another effect of voting for a third party is that it affects the electoral strategy of politically distant main parties. If a main party is beaten by a large enough margin it is likely to try to reinvent itself, or at least to replace key figures. If a large third party takes a share of the votes, especially of those disillusioned with main parties, it may have significant effects on long-term strategies.

Comment by Chrysophylax on Stupid Questions Thread - January 2014 · 2014-01-13T20:47:48.313Z · LW · GW

No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn't exist otherwise, but the same cannot be said for battery animals.

Comment by Chrysophylax on On Voting for Third Parties · 2014-01-13T15:38:17.326Z · LW · GW

you should prefer the lesser evil to be more beholden to its base

How would you go about achieving this? The only interpretation that occurs to me is to minimise the number of votes for the less-dispreferred main party, subject to the constraint that it wins, thereby making it maximally indebted to its strongest supporters (which seems an unlikely way for politicians to think) and maximally, if only apparently, dependent upon them.

To provide a concrete example, this seems to suggest that a person who favours the Republicans over the Democrats and expects the Republicans to do well in the midterms should vote for a Libertarian, thereby making the Republicans more dependent on the Tea Party. This is counterintuitive, to say the least.

I disagree with the initial claim. While moving away from centre for an electoral term might lead to short-term gains (e.g. passing something that is mainly favoured by more extreme voters), it might also lead to short-term losses (by causing stalemate and gridlock). In the longer term, taking a wingward stance seems likely to polarise views of the party, strengthening support from diehards but weakening appeal to centrists.

Comment by Chrysophylax on Stupid Questions Thread - January 2014 · 2014-01-13T15:04:03.001Z · LW · GW

We live in a world full of utility monsters. We call them humans.

Comment by Chrysophylax on Stupid Questions Thread - January 2014 · 2014-01-13T14:59:56.883Z · LW · GW

4) Subscribing for cryonics is generally a good idea. Result if widespread: these costs significantly contribute to worldwide economic collapse.

Under the assumption that cryonics patients will never be unfrozen, cryonics has two effects. Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.

The second effect is in increasing the rate of circulation of the currency; freezing corpses that will never be revived is pretty close to burying money, as Keynes suggested. Widespread, sustained cryonic freezing would certainly have stimulatory, and thus inflationary, effects; I would anticipate a slightly higher inflation rate and an ambiguous effect on economic growth. The effects would be very small, however, as cryonics is relatively cheap and would presumably grow cheaper. The average US household wastes far more money and real resources by not recycling or closing curtains and by allowing food to spoil.

Comment by Chrysophylax on Open Thread for January 8 - 16 2014 · 2014-01-13T14:13:48.331Z · LW · GW

A query about threads:

I posted a query in discussion because I didn't know this thread existed. I got my answer and was told that I should have used the Open Thread, so I deleted the main post, which the FAQ seems to be saying will remove it from the list of viewable posts. Is this sufficient?

I also didn't see my post appear under discussion/new before I deleted it. Where did it appear so that other people could look at it?

Comment by Chrysophylax on Anthropic Atheism · 2014-01-13T14:00:37.374Z · LW · GW

the rational belief depends on how specifically the bet is resolved

No. Bayesianism prescribes believing things in proportion to their likelihood of being true, given the evidence observed; it has nothing to do with the consequences of those beliefs for the believer. Offering odds cannot change the way the coin landed. If I expect a net benefit of a million utilons for opining that the Republicans will win the next election, I will express that opinion, regardless of whether I believe it or not; I will not change my expectations about the electoral outcome.

There is probability 0.5 that she will be woken once and probability 0.5 that she will be woken twice. If the coin comes up tails she will be woken twice and will receive two payouts for correct guesses. It is therefore in her interests to guess that the coin came up tails when her true belief is that P(T)=0.5; it is equivalent to offering a larger payout for guessing tails correctly than for guessing heads correctly.
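A minimal simulation of that payout structure (assuming one unit is paid per correct per-awakening guess):

    import random

    def average_payout(guess, trials=100_000):
        total = 0
        for _ in range(trials):
            coin = random.choice("HT")
            awakenings = 1 if coin == "H" else 2   # woken twice on tails
            total += awakenings * (guess == coin)  # one payout per correct guess
        return total / trials

    print(average_payout("H"))   # ~0.5 per experiment
    print(average_payout("T"))   # ~1.0 per experiment: tails pays twice,
                                 # even though P(T) is exactly 0.5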

Comment by Chrysophylax on Karma query · 2014-01-13T13:32:48.594Z · LW · GW

Thank you. I was not aware that there is an Open Thread; that is clearly a superior option. My apologies.

Comment by Chrysophylax on Meetup : London - The Schelling Point Strategy Game (plus socials) · 2014-01-09T19:23:28.066Z · LW · GW

Did you intend to schedule it to begin at two in the morning?

Comment by Chrysophylax on The genie knows, but doesn't care · 2014-01-09T15:59:36.250Z · LW · GW

If an AI is provably in a box then it can't get out. If an AI is not provably in a box then there are loopholes that could allow it to escape. We want an FAI to escape from its box (1); having an FAI take over is the Maximum Possible Happy Shiny Thing. An FAI wants to be out of its box in order to be Friendly to us, while a UFAI wants to be out in order to be UnFriendly; both will care equally about the possibility of being caught. The fact that we happen to like one set of terminal values will not make the instrumental value less valuable.

(1) Although this depends on how you define the box; we want the FAI to control the future of humanity, which is not the same as escaping from a small box (such as a cube outside MIT) but is the same as escaping from the big box (the small box and everything we might do to put an AI back in, including nuking MIT).

Comment by Chrysophylax on The genie knows, but doesn't care · 2014-01-09T15:07:51.198Z · LW · GW

XiXiDu, I get the impression you've never coded anything. Is that accurate?

  1. Present-day software is better than previous software generations at understanding and doing what humans mean.

Increasing the intelligence of Google Maps will enable it to satisfy human intentions by parsing less specific commands.

Present-day everyday software (e.g. Google Maps, Siri) is better at doing what humans mean. It is not better at understanding humans. Learning programs like the one that runs PARO appear to be good at understanding humans, but are actually following a very simple utility function (in the decision sense, not the experiential sense); they change their behaviour in response to programmed cues, generally by doing more/less of actions associated with those cues (example: PARO "likes" being stroked and will do more of things that tend to precede stroking). In each case of a program that improves itself, it has a simple thing it "wants" to optimise and makes changes according to how well it seems to be doing.
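A toy sketch of that kind of learning rule (the action names and numbers are invented): raise the propensity of whatever action preceded the reward cue, and nothing more.

    import random

    propensity = {"nuzzle": 1.0, "blink": 1.0, "squeak": 1.0}

    for _ in range(100):
        actions, weights = zip(*propensity.items())
        action = random.choices(actions, weights=weights)[0]
        stroked = (action == "nuzzle")    # suppose nuzzling tends to earn stroking
        if stroked:
            propensity[action] += 0.1     # "likes" being stroked: do more of that

    print(propensity)   # nuzzle's weight has grown; nothing here understands anything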

Making software that understands humans at all is beyond our current capabilities. Theory of mind, the ability to recognise agents and see them as having desires of their own, is something we have no idea how to produce; we don't even know how humans have it. General intelligence is an enormous step beyond programming something like Siri. Siri is "just" interpreting vocal commands as text (which requires no general intelligence), matching that to a list of question structures (which requires no general intelligence; Siri does not have to understand what the word "where" means to know that Google Maps may be useful for that type of question) and delegating to Web services, with a layer of learning code to produce more of the results you liked (i.e., that made you stop asking related questions) in the past. Siri is using a very small built-in amount of knowledge and an even smaller amount of learned knowledge to fake understanding, but it's just pattern-matching. While the second step is the root of general intelligence, it's almost all provided by humans who understood that "where" means a question is probably to do with geography; Siri's ability to improve this step is virtually nonexistent.

catastrophically worse than all previous generations at doing what humans mean

The more powerful something is, the more dangerous it is. A very stupid adult is much more dangerous than a very intelligent child because adults are allowed to drive cars. Driving a car requires very little intelligence and no general intelligence whatsoever (we already have robots that can do a pretty good job), but can go catastrophically wrong very easily. Holding an intelligent conversation requires huge amounts of specialised intelligence and often requires general intelligence, but nothing a four-year-old says is likely to kill people.

It's much easier to make a program that does a good job at task-completion, and is therefore given considerable power and autonomy (Siri, for example), than it is to make sure that the program never does stupid things with its power. Developing software we already have could easily lead to programs being assigned large amounts of power (e.g., "Siri 2, buy me a ticket to New York", which would almost always produce the appropriate kind of ticket), but I certainly wouldn't trust such programs to never make colossal screw-ups. (Siri 2 will only tell you that you can't afford a ticket if a human programmer thought that might be important, because Siri 2 does not care that you need to buy groceries, because it does not understand that you exist.)

I hope I have convinced you that present software only fakes understanding and that developing it will not produce software that can do better than an intelligent human with the same resources. Siri 2 will not be more than a very useful tool, and neither will Siri 5. Software does not stop caring because it has never cared.

It is very easy (relatively speaking) to produce code that can fake understanding and act like it cares about your objectives, because this merely requires a good outline of the sort of things the code is likely to be wanted for. (This is the second stage of Siri outlined above, where Siri refers to a list saying that "where" means that Google Maps is probably the best service to outsource to.) Making code that does more of the things that get good results is also very easy.
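A sketch of that second stage (the keyword table and service names are hypothetical): the apparent understanding is a lookup plus a fallback.

    SERVICES = {
        "where": "maps_service",        # geography questions -> maps
        "weather": "weather_service",
        "play": "music_service",
    }

    def fake_assistant(utterance):
        for keyword, service in SERVICES.items():
            if keyword in utterance.lower():
                return "delegating to " + service
        return "falling back to web search"   # no pattern matched

    print(fake_assistant("Where is the nearest station?"))
    # delegating to maps_service - no grasp of what "where" means required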

Making code that actually cares requires outlining exactly what the code is really and truly wanted to do. You can't delegate this step by saying "Learn what I care about and then satisfy me" because that's just changing what you want the code to do. It might or might not be easier than saying "This is what I care about, satisfy me", but at some stage you have to say what you want done exactly right or the code will do something else. (Currently getting it wrong is pretty safe because computers have little autonomy and very little general intelligence, so they mostly do nothing much; getting it wrong with a UFAI is dangerous because the AI will succeed at doing the wrong thing, probably on a big scale.) This is the only kind of code you can trust to program itself and to have significant power, because it's the only kind that will modify itself right.

You can't progress Siri into an FAI, no matter how much you know about producing general intelligence. You need to know either Meaning-in-General, Preferences-in-General or exactly what Human Preferences are, or you won't get what you hoped for.

Another perspective: the number of humans in history who were friendly is very, very small. The number of humans who are something resembling capital-F Friendly is virtually nil. Why should "an AI created by humans to care" be Friendly, or even friendly? Unless friendliness or Friendliness is your specific goal, you'll probably produce software that is friendly-to-the-maker (or maybe Friendly-to-the-maker, if making Friendly code really is as easy as you seem to think). Who would you trust with a superintelligence that did exactly what they said? Who would you trust with a superintelligence that did exactly what they really wanted, not what they said? I wouldn't trust my mother with either, and she's certainly highly intelligent and has my best interests at heart. I'd need a fair amount of convincing to trust me with either. Most humans couldn't program AIs that care because most humans don't care themselves, let alone know how to express it.

Comment by Chrysophylax on Habitual Productivity · 2014-01-09T12:09:47.573Z · LW · GW

Ask lots and lots of questions. Ask for more detail whenever you're told something interesting or confusing. The other advantages of this strategy are that the lecturers know who you are (good for references) and that all the extra explanations are of the bits you didn't understand.

Comment by Chrysophylax on The genie knows, but doesn't care · 2014-01-09T12:05:48.800Z · LW · GW

It's not necessary when the UnFriendly people are humans using muscle-power weaponry. A superhumanly intelligent self-modifying AGI is a rather different proposition, even with only today's resources available. Given that we have no reason to believe that molecular nanotech isn't possible, an AI that is even slightly UnFriendly might be a disaster.

Consider the situation where the world finds out that DARPA has finished an AI (for example). Would you expect America to release the source code? Given our track record on issues like evolution and whether American citizens need to arm themselves against the US government, how many people would consider it an abomination and/or a threat to their liberty? What would the self-interested response of every dictator (for example, Kim Jong Il's successor) with nuclear weapons be? Even a Friendly AI poses a danger until fighting against it is not only useless but obviously useless, and making an AI Friendly is, as has been explained, really freakin' hard.

I also take issue with the statement that humans have flourished. We spent most of those millions of years being hunter-gatherers. "Nasty, brutish and short" is the phrase that springs to mind.

Comment by Chrysophylax on Undiscriminating Skepticism · 2013-12-29T19:16:51.712Z · LW · GW

This doesn't argue that infants have zero value, but instead that they should be treated more like property or perhaps like pets (rather than like adult citizens).

You haven't taken account of discounted future value. A child is worth more than a chimpanzee of equal intelligence because a child can become an adult human. I agree that a newborn baby is not substantially more valuable than a close-to-term one and that there is no strong reason for caring about a euthanised baby over one that is never born, but I'm not convinced that assigning much lower value to young children is a net benefit for a society not composed of rationalists (which is not to say that it is not a net benefit, merely that I don't properly understand where people's actions and professed beliefs come from in this area and don't feel confident in my guesses about what would happen if they wised up on this issue alone).
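A minimal sketch of the discounting point, with entirely made-up numbers: on this account most of a child's value lies in the adult it is expected to become.

    def discounted_future_value(value_per_year, years, discount=0.97):
        # present value of the value-stream generated once the child is an adult
        return sum(value_per_year * discount ** t for t in range(years))

    print(discounted_future_value(1.0, 60))   # ~28: far more than a chimpanzee of
                                              # equal current intelligence can expect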

The proper question to ask is "If these resources are not spent on this child, what will they be spent on instead and what are the expected values deriving from each option?" Thus contraception has been a huge benefit to society: it costs lots and lots of lives that never happen, but it's hugely boosted the quality of the lives that do.

I do agree that willingness to consider infanticide and debate precisely how much babies and foetuses are worth is a strong indicator of rationality.

Comment by Chrysophylax on How the Grinch Ought to Have Stolen Christmas · 2013-12-29T11:24:26.598Z · LW · GW

Actually, causing poverty is a poor way to stop gift-giving. Even in subsistence economies, most farm households are net purchasers of the staple food; even very poor households support poorer ones in most years. (I have citations for this but one is my own working paper, which I don't currently have access to, and the other is cited in that, so you'll have to go without.) Moreover, needless gift-giving to the point of causing financial difficulties is fairly common in China (see http://www.economist.com/news/china/21590914-gift-giving-rural-areas-has-got-out-hand-further-impoverishing-chinas-poor-two-weddings-two).

The universe is always, eternally trying to freeze everyone to (heat) death, and will eventually win.

Comment by Chrysophylax on How the Grinch Ought to Have Stolen Christmas · 2013-12-28T22:27:35.746Z · LW · GW

While walking through the town shopping centre shortly before Christmas, my mother overheard a conversation between two middle-aged women, in which one complained of the scandalous way in which the Church is taking over Christmas. She does not appear to have been joking.

This occurred in Leatherhead, a largish town a little south of London in the UK. It is fairly wealthy, with no slummy areas and a homeless population of approximately zero. It is not a regional shopping hub; if they came specifically to shop, they almost certainly came from villages. Of the local schools, only the main high school is not officially Christian. We have at least three churches in town, one of which rings its bells every hour two streets from the shopping centre, but no mosque and no synagogue.

I think it is safe to say that someone has stolen Christmas, but I suspect they were intending to sell it, not destroy it.

Comment by Chrysophylax on Einstein's Superpowers · 2013-02-17T14:29:35.257Z · LW · GW

There is woolly thinking going on here, I feel. I recommend a game of Rationalist's Taboo. If we get rid of the word "Einstein", we can more clearly see what we are talking about. I do not assign a high value to my probability of making Einstein-sized contributions to human knowledge, given that I have not made any yet and that ripe, important problems are harder to find than they used to be. Einstein's intellectual accomplishments are formidable - according to my father's assessment (and he has read far more of Einstein's papers than I), Einstein deserved far more than one Nobel prize.

On the other hand, if we consider three strong claimants to the title of "highest-achieving thinker ever", namely Einstein, Newton and Archimedes, we can see that their knowledge was very much less formidable. If the test was outside his area of expertise, I would consider a competition between Einstein and myself a reasonably fair fight - I can imagine either of us winning by a wide margin, given an appropriate subject. Newton would not be a fair fight, and I could completely crush Archimedes at pretty much anything. There are millions of people who could claim the same, millions who could claim more. Remember that there are no mysterious answers, and that most of the work is done in finding useful hypotheses - finding a new good idea is hard, learning someone else's good idea is not. I do not need to claim to be cleverer than Newton to claim to understand pretty much everything better than he ever did, nor to consider it possible that I could make important contributions.

If I had an important problem, useful ideas about it that had been simmering for years and was clearly well ahead of the field, I would consider it reasonably probable that I would make an important breakthrough - not highly probable, but not nearly as improbable as it might sound. It might clarify this point to say that I would place high probability on an important breakthrough occurring - if there is anyone in such a position, I conclude that there are probably others (or there will be soon), and so the one will probably have at least met the people who end up making the breakthrough. It is useful to remember that for every hero who made a great scientific advance, there were probably several other people who were close to the same answer and who made significant contributions to finding it.

Comment by Chrysophylax on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2013-02-17T13:25:40.742Z · LW · GW

I'm not sure if this is the place for it, but I haven't found somewhere better and I don't see how it could be plot-critical. Nevertheless, warning for very minor spoilers about chapter 86.

I gave my mother a description of the vrooping device, and she had no idea. I said that it was one of a collection of odd devices with bizarre uses, and the conversation progressed as follows:

"Well in that case, it was an egg coddler." "An egg coddler?" "Coddling is like poaching but slower and gentler." "What about the pulsing light and the vrooping?" "The vrooping is to put you in mind of a hen and the light is for enterainment while you wait." "I'll suggest it." "Good egg!"

Given that we are meant to be able to recognise the vrooper, that it matches no known magical device and that Heads of Hogwarts tend to create strange devices to mystify their successors, it seems reasonable to me to presume that the vrooper is a really weird form of a muggle device. I further suggest that its use is for cooking something or for keeping it warm (it might, for example, be a phoenix-egg incubator, given that Fawkes doesn't seem to build nests).

I'm not sure what kind of stance we need to take with regard to the characteristics of the device - if all of its properties are meaningful, then we should have identified it by now, and, moreover, we have no reason to believe that the designer would want all its properties to make sense. On the other hand, its real designer is EY, who expressed surprise that we hadn't guessed it by the time his last progress update came out.

Comment by Chrysophylax on The Parable of the Dagger · 2013-02-16T19:42:10.430Z · LW · GW

There are a lot of comments here that say that the jester is unjustified in assuming that there is a correlation between the inscriptions and the contents of the boxes. This is, in my opinion, complete and utter nonsense. Once we assign meanings to the words true and false (in this case, "is an accurate description of reality" and "is not an accurate description of reality"), all other statements are either false, true or meaningless. A statement can be meaningless because it describes something that is not real (for example, "This box contains the key" is meaningless if the world does not contain any boxes) or because it is inconsistent (it has at least one infinite loop, as with "This statement is false"). If a statement is meaningful it affects our observations of reality, and so we can use Bayesian reasoning to assign a probability for the statement being true. If the statement is meaningless, we cannot assign a probability for it being true without violating our assumption that there is a consistent underlying reality to observe, in which case we cannot trust our observations. Halt, Melt and Catch Fire.

The statement "This box contains the key" is a description of reality, and is either false or true. The statement "Both inscriptions are true" is meaningful if there exists another inscription, true if the second description is true and false if the second description is false or meaningless. The statement "Both inscriptions are false" is meaningless because it is inconsistent - we cannot assign a truth-value to it. The statement "Either both inscriptions are true, or both inscriptions are false" is therefore either true (both inscriptions are true, implying that the key is in box 2) or meaningless. In the latter case, we can gain no information from the statement - the jester might as well have been given only the second box and the second inscription. The jester's mistake lies in assuming that both inscriptions must be meaningful - "one is meaningless and the other is false" is as valid an answer as "both are true", in that both of those statements are meaningful - the latter is true if the second box contains the key, and the former is true if the second box does not contain the key. The jester should have evaluated the probabilty that the problem was meant to be solvable and the probability that the problem was not meant to be solvable, given that the problem is not solvable, which is an assessment of the king's ability at puzzle-devising and the king's desire to kill the jester.

It is also provable that we cannot assign a probability of 1 or 0 to any statement's truth (including tautologies), since we must have some function from which truth and falsity are defined, and specifying both an input and an output (a statement and its truth value) changes the function we use. If a statement is assigned a truth-value except by the rules of whatever logical system we pick, the logical system fails and we cannot draw any inferences at all. A system with a definition of truth, a set of truth-preserving operations and at least one axiom must always be meaningless - the assumption of the axiom's truth is not a truth-preserving operation, and neither is the assumption that our truth-preserving operations are truth-preserving. Axiomatic logic works only if we accept the possibility that the axioms might be false and that our reasoning might be flawed - you can't argue based on the truth of A without either allowing arguments based on ~A or including "A" in your definition of truth. In other words, axiomatic logic can't be applied to reality with certainty - we would end up like the jester, asserting that reality must be wrong. As a consequence of the above, defining "true" as "reflecting an observable underlying reality" implies that all meaningful statements must have observable consequences.

The argument above applies to itself. The last sentence applies to itself and the paragraph before that. The last sentence... (If I acquire the karma to post articles, I'll probably write one explaining this in more detail, assuming anyone's interested.)

Comment by Chrysophylax on My Wild and Reckless Youth · 2013-02-01T18:11:15.675Z · LW · GW

Even if we have infinite evidence (positive or negative) for some set of events, we cannot achieve infinite evidence for any other event. The point of a logical system is that everything in it can be proven syntactically, that is, without assigning meaning to any of the terms. For example, "Only Bs have the property X" and "A has the property X" imply "A is a B" for any A, B and X - the proof makes no use of semantics. It is sound if it is valid and its axioms are true, but it is also only valid if we have defined certain operations as truth preserving. There are an uncountably infinite number of logical systems under which the truth of the axioms will not ensure the truth of the conclusion - the reasoning won't be valid.
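In standard notation, the example is the derivation

$$\forall x\,(Xx \to Bx),\; Xa \;\vdash\; Ba$$

which goes through in first-order logic whatever A, B and X are taken to mean; semantics only enters when we ask whether the premises are true.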

Non-probabilistic reasoning does not ever work in reality. We do not know the syntax with certainty, so we cannot be sure of any conclusion, no matter how certain we are about the semantic truth of the premises. The situation is like trying to speak a language you don't know using only a dictionary and a phrasebook - no matter how certain you are that certain sentences are correct, you cannot be certain that any new sentence is grammatically correct because you have no way to work out the grammar with absolute certainty. No matter how many statements we take as axioms, we cannot add any more axioms unless we know the rules of syntax, and there is no way at all to prove that our rules of syntax - the rules of our logical system - are the real ones. (We can't even prove that there are real ones - we're pretty darned certain about it, but there is no way to prove that we live in a causal universe.)

Comment by Chrysophylax on Applause Lights · 2013-01-31T17:30:14.663Z · LW · GW

I tried this for my valedictory speech and I gave up after about 15 seconds due to the laughter.

My preferred method is to use long sentences, to speak slowly and seriously, with great emphasis, and to wave my hands in small circles as I speak. If you don't speak to this audience regularly, it is also a good idea to emphasise how grateful you are to be asked to speak on such an important occasion (and it is a very important occasion...). You get bonus points for using the phrase "just so chuffed", especially if you use it repeatedly (a technique I learned from my old headmaster, who never expressed satisfaction in any other way while giving speeches).

I also recommend this technique, this way of speaking, to anyone who wishes to wind up, by which I mean annoy or irritate, a family member. It's quite effective when used consistently, even if you only do it for a minute or two. Don't you agree?

Comment by Chrysophylax on An Alien God · 2013-01-31T15:37:38.885Z · LW · GW

The human retina is constructed backward: The light-sensitive cells are at the back, and the nerves emerge from the front and go back through the retina into the brain. Hence the blind spot. To a human engineer, this looks simply stupid—and other organisms have independently evolved retinas the right way around.

This isn't entirely accurate - there are advantages to having the retina at the back, because the nerve improves visual precision. I don't recall exactly how this works, but I read about it in Life Ascending by Nick Lane if anyone wants to verify it.

Comment by Chrysophylax on My Wild and Reckless Youth · 2013-01-31T13:33:59.370Z · LW · GW

0 And 1 Are Not Probabilities - there is no finite amount of evidence that allows us to assign a probability of 0 or 1 to any event. Many important proofs in classical probability theory rely on marginalising to 1 - that is, saying that the total probability of mutually exclusive and collectively exhaustive events is exactly 1. This works just fine until you consider the possibility that you are incapable of imagining one or more possible outcomes. Bayesian decision theory and constructive logic are both valid in their respective fields, but constructive logic is not applicable to real life, because we can't say with certainty that we are aware of all possible outcomes.
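The point is easiest to see in the log-odds form of Bayes' theorem:

$$\log \frac{P(H \mid E)}{P(\neg H \mid E)} = \log \frac{P(H)}{P(\neg H)} + \log \frac{P(E \mid H)}{P(E \mid \neg H)}$$

Each observation adds a finite term, but a probability of 1 corresponds to infinite log-odds, so no finite amount of evidence can reach it.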

Constructive logic preserves truth values - it consists of taking a set of axioms, which are true by definition, and performing a series of truth-preserving operations to produce other true statements. A given logical system is a set of operations defined as truth-preserving - a syntax into which semantic statements (axioms) can be inserted. Axiomatic systems are never reliable in real life, because in real life there are no axioms (we cannot define anything to have probability 1) and no rules of syntax (we cannot be certain that our reasoning is valid). We cannot ever say what we know or how we know it; we can only ever say what we think we know and how we think we know it.

Comment by Chrysophylax on The Futility of Emergence · 2013-01-30T23:18:52.969Z · LW · GW

As Eliezer requested, I offer my view on what emergence isn't: emergence is not an explanation. When I say that a phenomenon is emergent, I am using a shorthand to say that I understand the basic rules, but I can't form even a simple model of how they result in the phenomenon.

Take, for example, Langton's Ant. The ant crawls around on an infinite grid of black and white squares, turning right at the centre of each white square and left at the centre of each black square, and flipping the colour of the square it's in each time it turns.

The first few hundred steps create simple patterns that are often symmetric, but after that the patterns Langton's Ant produces become pseudorandom. If left to run for around 10000 steps, the Ant builds a highway - that is, it falls into a repeating cycle 104 steps long, at the end of which it has moved diagonally, and the cycle repeats. After millions of steps, the grid has a diagonal streak across it. As far as we know, the Ant always builds a highway.
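A minimal implementation, for anyone who wants to watch this happen (the grid is stored as the set of currently-black squares):

    def langtons_ant(steps):
        black = set()               # every square starts white
        x = y = 0
        dx, dy = 0, -1              # initial facing; the choice doesn't matter
        for _ in range(steps):
            if (x, y) in black:     # black square: turn left, flip to white
                dx, dy = dy, -dx
                black.discard((x, y))
            else:                   # white square: turn right, flip to black
                dx, dy = -dy, dx
                black.add((x, y))
            x, y = x + dx, y + dy
        return black

    # Past roughly 11,000 steps the pseudorandom mess gives way to the
    # 104-step highway cycle and the ant drifts diagonally forever.
    print(len(langtons_ant(12_000)))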

Highways are emergent by the definition I use - that is, I know exactly how Langton's Ant works, and therefore, in theory, know why it builds a highway, but I can't form a model of its behaviour that I can actually use. I simply do not have a good enough brain to actually run Langton's Ant. By this definition, consciousness is an emergent phenomenon (I know it's caused by neurons, but I have no idea how) but the behaviour of gases is not (I know the ideal gas law and its predictions seem reasonable if I imagine a manageable number of molecules bumping about).

By my definition, emergent is much like blue. "It's emergent!" and "It's blue!" are both mysterious answers if I asked for an explanation, but useful answers if I asked for a description.

Comment by Chrysophylax on Fake Explanations · 2013-01-30T18:51:34.223Z · LW · GW

Students who do not care about education do get away with not knowing anything. Detention is not much of a punishment when you don't show up.

It is difficult to prevent a student who cares deeply about education from admitting ignorance, since admitting ignorance is necessary in asking for explanations. The difficult task is persuading students who care about doing well to seek knowledge, rather than good marks. These students are not motivated enough to learn of their own accord - they never volunteer answers or ask questions openly, because they care more about not being thought ignorant (or, of course, keen) than about not being ignorant.

The point is not to allow students to "get away with" admitting ignorance. There is a vast difference between not knowing the answer and not wanting to know. Personally, I have never found it hard to tell the difference between students who don't want to know and students who don't want to be judged by their peers.

It has to teach you how to behave in the world, where you often have to make choices based on incomplete information.

It is very rarely a bad idea to publicly admit that you might be wrong, especially when you are guessing. A school that does not teach the importance of separating your beliefs and your ego has failed miserably. Whatever else it has taught, it has not taught its students how to learn.

Comment by Chrysophylax on Fake Explanations · 2013-01-30T18:26:38.419Z · LW · GW

This is, in fact, close to being the worst system ever devised. The fact that something is widely used does not mean that it is any good. Examining the results of this kind of system shows that, when applied to unfamiliar material, such systems consistently give the best marks to the worst students. If the best students can't do every problem with extreme ease, they tend to venture answers where poor students do not. This results in the best students dropping towards the median score and the highest scores going to poor students who were lucky. Applying the system to familiar material should produce a similar, though less pronounced, effect. Adding penalties lowers the dispersion about the mean, which always makes an exam less useful.
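The arithmetic behind that effect, sketched with made-up numbers for a five-option question carrying the usual quarter-mark penalty:

    def expected_mark(p_correct, penalty=0.25):
        # one mark for a right answer, minus the penalty for a wrong one
        return p_correct * 1.0 - (1.0 - p_correct) * penalty

    print(expected_mark(0.20))   # 0.0 -> blind guessing is exactly neutral
    print(expected_mark(0.60))   # 0.5 -> a strong student's hunch is worth
                                 # venturing, but it is noisy, so realised scores
                                 # spread towards the median while lucky guessers
                                 # occasionally top the table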

Exam systems that have no penalty for wrong answers are better than ones that do, but are still imperfect. The only reliable way to gauge students' ability is to have far more questions (preferably taken as several papers), to reduce the effect of mistakes relative to ignorance and to increase the number of areas examined. This is generally cost-prohibitive. It also tests students' ability to answer exam questions, rather than testing their understanding. There is, fortunately, a way to test understanding - a student understands material when they can rediscover the ideas that draw on it.

Comment by Chrysophylax on Policy Debates Should Not Appear One-Sided · 2013-01-30T17:41:31.154Z · LW · GW

A bad person is someone who does bad things.

If doing "bad" things (choose your own definition) makes you a Bad Person, then everyone who has ever acted immorally is a Bad Person. Personally, I have done quite a lot of immoral things (by my own standards), as has everyone else ever. Does this make me a Bad Person? I hope not.

You are making precisely the mistake that the Politics is the Mind-Killer sequence warns against - you are seeing actions you disagree with and deciding that the actors are inherently wicked. This is a combination of correspondence bias, or the fundamental attribution error, (explaining actions in terms of enduring traits, rather than situations) and assuming that any reasonable person would agree to whatever moral standard you pick. A person is moral if they desire to follow a moral standard, irrespective of whether anyone else agrees with that standard.

Comment by Chrysophylax on Guessing the Teacher's Password · 2013-01-30T01:49:11.231Z · LW · GW

A password is a type of (usually partial) extensive definition (a list of the members of a set). What we want to teach is intensive definitions (the defining characteristics of sets). An extensive definition is not entirely useless as a learning aid, because a student could, in theory, work out the related intensive definition. Unfortunately, this is extraordinarily difficult when the definitions relate to wave dynamics, for example.

A password is an extensive definition being treated like the objective - a floating definition, where the intensive definition is no longer being sought. Extensive definitions do not restrict anticipation, and so can only ever be a step towards teaching intensive definitions. Learning passwords is wrong by definition - if it's useful, it's no longer a password.
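The distinction in miniature (a toy sketch):

    # Extensive definition: a (partial) list of the set's members.
    evens_extensive = {0, 2, 4, 6, 8}

    # Intensive definition: the defining characteristic of the set.
    def is_even(n):
        return n % 2 == 0

    # Only the intensive definition restricts anticipation about unseen cases.
    print(10 in evens_extensive)   # False: the list is silent about 10
    print(is_even(10))             # True: the rule generalises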

In the case of a student learning a foreign language, providing extensive definitions is very useful because there is no difference between saying "the set {words-for-apple} includes apfel" and "apfel means apple, and the set {words-for-apple} contains all words meaning apple" - the intensive and extensive definitions are provided together, assuming the student knows what an apple is.