Posts

AI origin question 2015-11-01T20:35:18.911Z · score: 1 (2 votes)
[LINK] Steven Pinker on "The false allure of group selection" 2012-06-19T20:14:15.842Z · score: 4 (6 votes)
[LINK] "Straight and crooked thinking," by Robert H. Thouless 2011-06-04T02:46:07.975Z · score: 5 (8 votes)
[LINK] Subculture roles (by Brad Hicks) 2011-05-18T03:00:43.252Z · score: 8 (13 votes)
Gödel and Bayes: quick question 2011-04-14T06:12:05.618Z · score: 1 (12 votes)
Gettier in Zombie World 2011-01-23T06:44:29.137Z · score: 1 (4 votes)

Comments

Comment by hairyfigment on Unusual medical event led to concluding I was most likely an AI in a simulated world · 2017-09-18T21:09:59.130Z · score: 0 (0 votes) · LW · GW

Sort of reminds me of that time I missed out on a lucid dream because I thought I was in a simulation. In practice, if you see a glitch in the Matrix, it's always a dream.

I find it interesting that we know humans are inclined to anthropomorphize, or see human-like minds everywhere. You began by talking about "entities", as if you remembered this pitfall, but it doesn't seem like you looked for ways that your "deception" could stem from a non-conscious entity. Of course the real answer (scenario 1) is basically that. You have delusions, and their origin lies in a non-conscious Universe.

Comment by hairyfigment on Intrinsic properties and Eliezer's metaethics · 2017-09-18T20:58:15.656Z · score: 0 (0 votes) · LW · GW

The second set of brackets may be the disconnect. If "their" refers to moral values, that seems like a category error. If it refers to stories etc, that still seems like a tough sell. Nothing I see about Peterson or his work looks encouraging.

Rather than looking for value you can salvage from his work, or an 'interpretation consistent with modern science,' please imagine that you never liked his approach and ask why you should look at this viewpoint on morality in particular rather than any of the other viewpoints you could examine. Assume you don't have time for all of them.

If that still doesn't help you see where I'm coming from, consider that reality is constantly changing and "the evolutionary process" usually happened in environments which no longer exist.

Comment by hairyfigment on Intrinsic properties and Eliezer's metaethics · 2017-09-18T20:11:19.709Z · score: 0 (0 votes) · LW · GW

Without using terms such as "grounding" or "basis," what are you saying and why should I care?

Comment by hairyfigment on Stupid Questions September 2017 · 2017-09-18T19:48:14.946Z · score: 0 (0 votes) · LW · GW

I repeat: show that none of your neurons have consciousness separate from your own.

Why on Earth would you think Searle's argument shows anything, when you can't establish that you aren't a Chinese Gym? In order to even cast doubt on the idea that neurons are people, don't you need to rely on functionalism or a similar premise?

Comment by hairyfigment on Stupid Questions September 2017 · 2017-09-18T06:01:42.704Z · score: 0 (0 votes) · LW · GW

What about it seems worth refuting?

The Zombie sequence may be related. (We'll see if I can actually link it here.) As far as the Chinese Room goes:

  • I think a necessary condition for consciousness is approximating a Bayesian update. So in the (ridiculous) version where the rules for speaking Chinese have no ability to learn, the system implementing them also can't be conscious.
  • Searle talks about "understanding" Chinese. Now, the way I would interpret this word depends on context - that's how language works - but normally I'd incline towards a Bayesian interpretation of "understanding" as well. So this again might depend on something Searle left out of his scenario, though the question might not have a fixed meaning.
  • Some versions of the "Chinese Gym" have many people working together to implement the algorithm. Now, your neurons are all technically alive in one sense. I genuinely feel unsure how much consciousness a single neuron can have. If I decide to claim it's comparable to a man blindly following rules in a room, I don't think Searle could refute this. (I also don't think it makes sense to say one neuron alone can understand Chinese; neurologists, feel free to correct me.) So what is his argument supposed to be?

Comment by hairyfigment on Open thread, September 11 - September 17, 2017 · 2017-09-16T22:44:35.984Z · score: 0 (0 votes) · LW · GW

Do you know what the Electoral College is? If so, see here:

The single most important reason that our model gave Trump a better chance than others is because of our assumption that polling errors are correlated.

Comment by hairyfigment on Open thread, September 11 - September 17, 2017 · 2017-09-16T19:35:47.599Z · score: 0 (0 votes) · LW · GW

Arguably, claims about Donald Trump winning enough states would qualify - but Nate Silver didn't assume independence, and his site still gave the outcome a low probability.
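
To make the difference concrete, here is a minimal Monte Carlo sketch with made-up numbers (nothing like FiveThirtyEight's actual model): an underdog trailing by the same small margin everywhere has a much better shot at carrying a majority of states when polling errors share a common component than when they are independent.

    # Toy model with invented numbers: 10 equally weighted states, the underdog
    # trails by 2 points in every state poll, and we compare independent state
    # polling errors with errors that share a common national component.
    import random

    STATES = 10
    POLL_MARGIN = -0.02
    TRIALS = 100_000

    def win_probability(shared_sd, state_sd):
        """P(underdog carries a majority of states) under the toy error model."""
        wins = 0
        for _ in range(TRIALS):
            national = random.gauss(0, shared_sd)      # error common to every state
            carried = sum(
                POLL_MARGIN + national + random.gauss(0, state_sd) > 0
                for _ in range(STATES)
            )
            wins += carried > STATES // 2
        return wins / TRIALS

    # Total error variance is roughly equal in both runs; only the correlation differs.
    print("independent errors:", win_probability(0.00, 0.042))
    print("correlated errors :", win_probability(0.03, 0.030))

With these toy numbers the independent-errors run lands somewhere around a few percent, and the correlated-errors run comes out several times higher - still a low probability, just not one you can round down to zero.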

Comment by hairyfigment on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-08-19T22:41:39.829Z · score: 0 (0 votes) · LW · GW

Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don't think we have a clear plan yet showing how we'll reach that level of practicality.

Justifying a not-super-exponentially-small prior probability for induction working feels like a category error. I guess we might get a kind of justification from better understanding Tegmark's Mathematical Macrocosm hypothesis - or, more likely, understanding why it fails. Such an argument will probably lack the intuitive force of 'Clearly the prior shouldn't be that low.'

Comment by hairyfigment on What Are The Chances of Actually Achieving FAI? · 2017-08-19T22:31:06.730Z · score: 0 (0 votes) · LW · GW

I would only expect the latter if we started with a human-like mind. A psychopath might care enough about humans to torture you; an uFAI not built to mimic us would just kill you, then use you for fuel and building material.

(Attempting to produce FAI should theoretically increase the probability by trying to make an AI care about humans. But this need not be a significant increase, and in fact MIRI seems well aware of the problem and keen to sniff out errors of this kind. In theory, an uFAI could decide to keep a few humans around for some reason - but not you. The chance of it wanting you in particular seems effectively nil.)

Comment by hairyfigment on Double Crux — A Strategy for Resolving Disagreement · 2017-03-20T18:32:33.182Z · score: 0 (0 votes) · LW · GW

Yes, but as it happens that kind of difference is unnecessary in the abstract. Besides the point I mentioned earlier, you could have a consistent set of axioms for "self-hating arithmetic" that proves arithmetic contradicts itself.

Completely unnecessary details here.

Comment by hairyfigment on Double Crux — A Strategy for Resolving Disagreement · 2017-03-14T01:10:56.855Z · score: 0 (0 votes) · LW · GW

Not if they're sufficiently different. Even within Bayesian probability (technically) we have an example in the hypothetical lemming race with a strong Gambler's Fallacy prior. ("Lemming" because you'd never meet a species like that unless someone had played games with them.)
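
As a crude sketch of what I mean - with the Gambler's Fallacy prior collapsed all the way down to a fixed anti-streak rule, and every number invented for the example - watch what happens when both agents score their predictions on an ordinary biased coin:

    # Crude illustration only: the "Gambler's Fallacy prior" is reduced to a fixed
    # anti-streak prediction rule, and both agents watch i.i.d. flips of a
    # 60%-heads coin.
    import math
    import random

    random.seed(0)
    TRUE_P, FLIPS, WINDOW = 0.6, 10_000, 5
    flips = [random.random() < TRUE_P for _ in range(FLIPS)]

    def log_score(prediction, outcome):
        p = prediction if outcome else 1 - prediction
        return math.log(max(p, 1e-9))

    bayes_total = fallacy_total = 0.0
    heads = 0
    for i, outcome in enumerate(flips):
        bayes_pred = (heads + 1) / (i + 2)                 # Laplace's rule of succession
        recent = flips[max(0, i - WINDOW):i]
        streak = 2 * sum(recent) - len(recent)             # heads minus tails, recently
        fallacy_pred = min(max(0.5 - 0.08 * streak, 0.05), 0.95)  # streaks "must" reverse
        bayes_total += log_score(bayes_pred, outcome)
        fallacy_total += log_score(fallacy_pred, outcome)
        heads += outcome

    print("average log-score, Bayesian learner:", bayes_total / FLIPS)
    print("average log-score, fallacy agent   :", fallacy_total / FLIPS)
    # The Bayesian's predictions approach the true 0.6; the fallacy agent's never
    # do, because it never entertains the possibility that the flips are independent.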

On the other hand, if an epistemological dispute actually stems from factual disagreements, we might approach the problem by looking for the actual reasons people adopted their different beliefs before having an explicit epistemology. Discussing a religious believer's faith in their parents may not be productive, but at least progress seems mathematically possible.

Comment by hairyfigment on Why Don't Rationalists Win? · 2017-03-03T01:29:27.157Z · score: 0 (0 votes) · LW · GW

How could correcting grammar be good epistemics? The only question of fact there is a practical one - how various people will react to the grammar coming out of your word-hole.

Comment by hairyfigment on Forcing Anthropics: Boltzmann Brains · 2017-02-13T21:31:52.256Z · score: 0 (0 votes) · LW · GW

  1. I'm using probability to represent personal uncertainty, and I am not a BB. So I think I can legitimately assign the theory a distribution to represent uncertainty, even if believing the theory would make me more uncertain than that. (Note that if we try to include radical logical uncertainty in the distribution, it's hard to argue the numbers would change. If a uniform distribution "is wrong," how would I know what I should be assigning high probability to?)

  2. I don't think you assign a 95% chance to being a BB, or even that you could do so without severe mental illness. Because for starters:

  3. Humans who really believe their actions mean nothing don't say, "I'll just pretend that isn't so." They stop functioning. Perhaps you meant the bar is literally 5% for meaningful action, and if you thought it was 0.1% you'd stop typing?

  4. I would agree if you'd said that evolution hardwired certain premises or approximate priors into us 'because it was useful' to evolution. I do not believe that humans can use the sort of Pascalian reasoning you claim to use here, not when the issue is BB or not BB. Nor do I believe it is in any way necessary. (Also, the link doesn't make this clear, but a true prior would need to include conditional probabilities under all theories being considered. Humans, too, start life with a sketch of conditional probabilities.)

Comment by hairyfigment on Interview with Nassim Taleb 'Trump makes sense to a grocery store owner' · 2017-02-13T08:41:27.154Z · score: 0 (0 votes) · LW · GW

OK, they gave him a greater chance than I thought of winning the popular vote. I can't tell if that applies to the polls-plus model which they actually seemed to believe, but that's not the point. The point is, they had a model with a lot of uncertainty based on recognizing the world is complicated, they explicitly assigned a disturbing probability to the actual outcome, and they praised Trump's state/Electoral College strategy for that reason.

Comment by hairyfigment on Forcing Anthropics: Boltzmann Brains · 2017-02-13T08:02:25.863Z · score: 0 (0 votes) · LW · GW

Seems like you're using a confusing definition of "believe", but the point is that I disagree about our reasons for rejecting the claim that you're a BB.

Note that according to your reasoning, any theory which says you're a BB must give us a uniform distribution for all possible experiences. So rationally coming to assign high probability to that theory seems nearly impossible if your experience is not actually random.
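
To put toy numbers on it (mine, purely for illustration): even an absurdly generous prior for the BB theory gets swamped, because a uniform likelihood over all possible experiences loses to any theory that predicts structured experience by a factor no prior can survive.

    # Invented numbers, in log base 2, just to make the likelihood argument concrete.
    BITS_OF_EXPERIENCE = 1_000_000       # bits needed to specify the experience
    log2_like_bb = -BITS_OF_EXPERIENCE   # BB theory: uniform over 2^N possible experiences
    log2_like_normal = -1_000            # ordinary-world theory, predicting it imperfectly
    log2_prior_odds_bb = 100             # spot the BB theory prior odds of 2^100 : 1 in its favor

    log2_posterior_odds_bb = log2_prior_odds_bb + log2_like_bb - log2_like_normal
    print("log2 posterior odds for BB:", log2_posterior_odds_bb)   # about -998,900
    # Even that wildly generous prior is swamped by the likelihood ratio.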

Comment by hairyfigment on Forcing Anthropics: Boltzmann Brains · 2017-02-11T22:51:10.334Z · score: 0 (0 votes) · LW · GW

One man's modus ponens is another man's modus tollens. I don't even believe that you believe the conclusion.

Comment by hairyfigment on Stupid Questions February 2017 · 2017-02-09T23:06:10.616Z · score: 0 (0 votes) · LW · GW

I am completely ignoring the very important part of defining God

That is indeed the chief problem here. I'm assuming you're talking about the prior probability which we have before looking at the evidence.

Comment by hairyfigment on Interview with Nassim Taleb 'Trump makes sense to a grocery store owner' · 2017-02-09T11:33:55.827Z · score: 2 (2 votes) · LW · GW

Not to put too fine a point on it, but I bet all of my money he didn't predict Trump would win the Electoral College while losing the popular vote by millions. That article gives no hint he even knows it happened. Meanwhile, FiveThirtyEight supposedly based most of the probability mass they assigned to Trump winning on a technical EC victory, and they said he had one chance in four (one in three earlier).

The fact that a 25%-probability event can happen, with potentially devastating consequences, is roughly why I take seriously the chance that Trump the outsider might blow up America, though I'd give that no more than a 13% chance.

Comment by hairyfigment on Forcing Anthropics: Boltzmann Brains · 2017-02-08T18:24:25.155Z · score: 0 (0 votes) · LW · GW

I don't think that's descriptively true at all. Regardless of whether or not I see a useful way to address it, I still wouldn't expect to dissolve momentarily with no warning.

Now, this may be because humans can't easily believe in novel claims. But "my" experience certainly seems more coherent than one would expect a BB's to seem, and this calls out for explanation.

Comment by hairyfigment on Why I'm working on satisficing intelligence augmentation (and not blogging about it much) · 2017-02-07T00:22:34.307Z · score: 0 (0 votes) · LW · GW

If it can't be solved, how will MIRI know?

For one, they wouldn't find a single example of a solution. They wouldn't see any fscking human beings maintaining any goal not defined in terms of their own perceptions - eg, making others happy, having an historical artifact, or visiting a place where some event actually happened - despite changing their understanding of our world's fundamental reality.

If I try to interpret the rest of your response charitably, it looks like you're saying the AGI can have goals wholly defined in terms of perception, because it can avoid wireheading via satisficing. That seems incompatible with what you said before, which again invoked "some abstract notion of good and bad" rather than sensory data. So I have to wonder if you understand anything I'm saying, or if you're conflating ontological crises with some less important "paradigm shift" - something, at least, that you have made no case for caring about.

Comment by hairyfigment on Civil resistance and the 3.5% rule · 2017-02-06T07:36:59.303Z · score: 0 (0 votes) · LW · GW

A misnamed idea which assumes there are effectively no costs to accommodating the minority.

Comment by hairyfigment on Why I'm working on satisficing intelligence augmentation (and not blogging about it much) · 2017-02-06T07:35:32.444Z · score: 1 (1 votes) · LW · GW

You're talking about an ontological crisis, though Arbital has a slightly different term. Naturally people have started to work on the problem and MIRI believes it can be solved.

It also seems like the exact same issue arises with satisficing, and you've hidden this fact by talking about "some abstract notion of good and bad" without explaining how the AGI will relate this notion to the world (or distribution) that it thinks it exists in.

Comment by hairyfigment on Crisis of Faith · 2017-01-31T01:26:19.490Z · score: 0 (0 votes) · LW · GW

Oh, you actually believe this crap. Then you should be ashamed of yourself.

Comment by hairyfigment on Crisis of Faith · 2017-01-26T19:07:09.771Z · score: 0 (0 votes) · LW · GW

How well do they serve each purpose? I'm given to understand Newton's Laws are highly useful in engineering. How do they compare with alternative means of producing status, like teaching everyone 'Ubik' and 'fnord?'

Comment by hairyfigment on Crisis of Faith · 2017-01-23T03:32:30.076Z · score: 0 (0 votes) · LW · GW

there seems to be a rather large gap where a single missionary, armed with nothing more than information and presumably a fairly persuasive tongue, can go into a large enough group of humans who have little or no previous knowledge of religion and end up persuading a number of them to join.

When do you believe this happened, aside from cases where "Jesus" was translated as "Buddha"? Missionaries today typically harass other Christians.

Comment by hairyfigment on "Flinching away from truth” is often about *protecting* the epistemology · 2017-01-16T18:53:26.058Z · score: 0 (0 votes) · LW · GW

Outcome? I was going to say that suboptimal could refer to a case where we don't know if you'll reach your goal, but we can show (by common assumptions, let's say) that the action has lower expected value than some other action. "Irrational" does not have such a precise technical meaning, though we often use it for more extreme suboptimality.

Comment by hairyfigment on LessWrong 2.0 · 2017-01-15T19:19:56.227Z · score: 2 (2 votes) · LW · GW

Upvoted, but this seems to vary from person to person. You also forgot how italics and lists work here.

Comment by hairyfigment on [LINK] EA Has A Lying Problem · 2017-01-14T19:39:04.432Z · score: 0 (0 votes) · LW · GW

This is wholly irrelevant, because we've already caught Gleb lying many times. His comment sacrifices nothing, and in fact he's likely posting it to excuse his crimes (the smart money says he's lying about something in the process).

Your point does apply to the OP trying to smear her first example for practicing radical honesty. This is one of the points I tried to make earlier.

Comment by hairyfigment on [LINK] EA Has A Lying Problem · 2017-01-12T07:43:44.508Z · score: 1 (3 votes) · LW · GW

I do not think it's fine. I think you're poisoning the discourse and should stop doing it, as indeed should the blogger in your example if there isn't more to go on. Is your last sentence some kind of parody, or an actual defense of the reason our country is broken?

Comment by hairyfigment on [LINK] EA Has A Lying Problem · 2017-01-12T06:15:52.437Z · score: 1 (1 votes) · LW · GW

I'm talking here about the linked post. The author's first example shows the exact opposite of what she said she would show. She only gives one example of something that she called a pattern, so that's one person saying they should consider dishonesty and another person doing the opposite.

If you think there's a version of her argument that is not total crap, I suggest you write it or at least sketch it out.

Comment by hairyfigment on [LINK] EA Has A Lying Problem · 2017-01-12T01:11:46.905Z · score: 2 (2 votes) · LW · GW

Another note I forgot to add: the first quote, about criticism, sounds like Ben Todd being extremely open and honest regarding his motives.

Comment by hairyfigment on [LINK] EA Has A Lying Problem · 2017-01-12T01:04:45.312Z · score: 1 (1 votes) · LW · GW

She does eventually give an example of what she says she's talking about - one example from Facebook, when she claimed to be seeing a pattern in many statements. Before that she objects to the standard use of the English word "promise," in exactly the way we would expect from an autistic person who has no ability to understand normal humans. Of course this is also consistent with a dishonest writer trying to manipulate autistic readers for some reason. I assume she will welcome this criticism.

(Seriously, I objected to her Ra post because the last thing humanity needs is more demonology; but even I didn't expect her to urge "mistrusting Something that speaks through them," like they're actually the pawns of demons. "Something" is very wrong with this post.)

The presence of a charlatan like Gleb around EA is indeed disturbing. I seem to recall people suggesting they were slow to condemn him because EA people need data to believe anything, and lack any central authority who could declare him anathema.

Comment by hairyfigment on Double Crux — A Strategy for Resolving Disagreement · 2017-01-11T23:43:52.581Z · score: 1 (2 votes) · LW · GW

Well, it's not easy.

Comment by hairyfigment on Crisis of Faith · 2017-01-09T09:43:45.492Z · score: 1 (1 votes) · LW · GW

Your second point is clearly true. The first seems false; Christianity makes much more sense from a Greco-Roman perspective if Jesus was supposed to be a celestial being, not an eternal unchanging principle that was executed for treason. And the sibling comment leaves out the part about first-century Israelites wanting a way to replace the 'corrupt,' Roman-controlled, Temple cult of sacrifice with something like a sacrifice that Rome could never control.

Josephus saw the destruction of that Temple coming. For others to believe it would happen if they 'restored the purity of the religion' only requires the existence of some sensible zealots.

Comment by hairyfigment on The Proper Use of Humility · 2017-01-02T03:38:54.657Z · score: 0 (0 votes) · LW · GW

As defined in some places - for example, the Occam's Razor essay that Eliezer linked for you many comments ago - simplicity is not the same as fitting the evidence.

The official doctrine of the Trinity has probability zero because the Catholic Church has systematically ruled out any self-consistent interpretation (though if you ask, they'll probably tell you one or more of the heresies is right after all). So discussing its complexity does seem like a waste of time to me as well. But that's not true for all details of Catholicism or Christianity (if for some reason you want to talk religion). Perhaps some intelligent Christians could see that we reject the details of their beliefs for the same reason they reject the lyrics of "I Believe" from The Book of Mormon.

Comment by hairyfigment on Two Cult Koans · 2017-01-02T03:25:43.357Z · score: -1 (1 votes) · LW · GW

If the Boy Scouts had never chosen a uniform, it would have been very hard for them to get their reputation for above-average conscientiousness and obedience to authority.

Especially the second one.

Comment by hairyfigment on Teaching an AI not to cheat? · 2016-12-30T02:39:31.507Z · score: 0 (0 votes) · LW · GW

Maybe the D&D example is unfairly biasing my reply, but giving humans wish spells without guidance is the opposite of what we want.

Comment by hairyfigment on Teaching an AI not to cheat? · 2016-12-30T02:37:47.038Z · score: 1 (1 votes) · LW · GW

The grandparent suggests that you need a separate solution to make your solution work. The claim seems to be that you can't solve FAI this way, because you'd need to have already solved the problem in order to make your idea stretch far enough.

Comment by hairyfigment on Teaching an AI not to cheat? · 2016-12-29T06:53:38.777Z · score: 0 (0 votes) · LW · GW

The first problem I see here is that cheating at D&D is exactly what we want the AI to do.

Comment by hairyfigment on Occam's Razor · 2016-12-28T03:09:49.520Z · score: 0 (0 votes) · LW · GW

Complexity, as defined in Solomonoff Induction, means program description length - that is, code length in bits.
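
If you want a computable caricature of that (real Solomonoff induction is uncomputable, so a compressor is only a stand-in for "shortest program"): compare patterned data with random data of the same literal length.

    import random
    import zlib

    patterned = ("01" * 500).encode()                                 # a tiny program generates this
    random_bytes = bytes(random.getrandbits(8) for _ in range(1000))  # no short description known

    print("literal length   :", len(patterned), len(random_bytes))
    print("compressed bound :", len(zlib.compress(patterned)), len(zlib.compress(random_bytes)))
    # The patterned string collapses to a few dozen bytes; the random bytes barely
    # shrink at all. Complexity here is about the generating program, not the
    # surface length of the data.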

Sidenote: thank you for reminding me that Eliezer was talking about better versions of SI in 2007, before starting his quantum mechanics sequence.

Comment by hairyfigment on Ozy's Thoughts on CFAR's Mission Statement · 2016-12-14T19:29:06.652Z · score: 0 (0 votes) · LW · GW

Though I also want to point out that MIRI-style research seems like a very cheap intervention relative to global warming. And here I'm talking about the research they should be doing, if they had $10 million.

Comment by hairyfigment on Open thread, Dec. 12 - Dec. 18, 2016 · 2016-12-14T19:23:13.198Z · score: 0 (0 votes) · LW · GW

Not sure I'm defending the UBI, but: we already have enough food to feed everyone on Earth. Plainly social factors can interfere with this rosy prediction.

Comment by hairyfigment on Land war in Asia · 2016-12-10T03:02:01.843Z · score: 0 (0 votes) · LW · GW

The decision you're talking about was indeed fatal, but "most fatal"? Hitler tried to kill all of Albert Einstein's people, with the result that Einstein wrote a letter to the President explaining the idea of nuclear weapons.

Comment by hairyfigment on Land war in Asia · 2016-12-10T02:55:04.374Z · score: 0 (0 votes) · LW · GW

I had in mind some specific remarks by Stalin, but let's say Hitler's invasion had no such effect.

What makes you think he was paranoid before this? His mass-murders didn't stop him from dying in his bed at an advanced age, nor from forcing the Soviet people to defend him and his power from Germany. He could easily have enjoyed killing people. On what grounds would you call his behavior irrational?

Comment by hairyfigment on My problems with Formal Friendly Artificial Intelligence work · 2016-12-09T08:28:44.206Z · score: 0 (0 votes) · LW · GW

Should they be? It looks like people here would be receptive if you have an idea for a problem that doesn't just tell us what we already know. But it also looks to me like the winners of the tournament both approximated, in a practical way, the search-through-many-proofs approach (LW writeup and discussion here).

Comment by hairyfigment on My problems with Formal Friendly Artificial Intelligence work · 2016-12-08T11:35:42.137Z · score: 1 (1 votes) · LW · GW

We don't have scenarios where utility depends upon the amount of time taken to compute results.

What?

Comment by hairyfigment on My problems with Formal Friendly Artificial Intelligence work · 2016-12-08T11:29:54.497Z · score: 2 (2 votes) · LW · GW

If people start baking in TDT or UDT into the core of their AIs philosophy

I don't understand UDT, but TDT can look at the evidence and decide what the other AI actually does. It can even have a probability distribution over possible source codes and use that to estimate expected value. This gives the other AI strong incentive to look for ways to prove its honesty.
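
As a toy illustration of the expected-value part - nothing like a real TDT implementation, and the candidate programs and probabilities are invented - you hold a distribution over the opponent's possible source codes and score each of your own actions against it:

    # One-shot Prisoner's Dilemma payoffs for "me" only; opponent models and their
    # probabilities are invented for the example.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    opponent_models = {
        "always_cooperate":  (0.2, lambda my_move: "C"),
        "always_defect":     (0.3, lambda my_move: "D"),
        "mirrors_my_choice": (0.5, lambda my_move: my_move),   # conditions its move on mine
    }

    def expected_value(my_move):
        return sum(p * PAYOFF[(my_move, model(my_move))]
                   for p, model in opponent_models.values())

    for move in ("C", "D"):
        print(move, expected_value(move))   # C: 2.1, D: 1.8 with these numbers
    # With enough weight on the mirroring program, cooperation wins - which is
    # exactly the incentive the other AI has to prove its source code really does
    # condition on ours.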

Comment by hairyfigment on Land war in Asia · 2016-12-08T11:16:39.983Z · score: 0 (0 votes) · LW · GW

the Germans were not crazy to think that they would eventually have to fight the Russians.

In my model they absolutely were, because evil and stupidity do not always go together. Stalin was a rational psychopath who kept every promise he made to a foreign power. (If he believed or cared about Communist ideology at all, he thought it rendered war against another major power redundant and foolish.) He thought Hitler was the same, to the point of not listening to anyone who said otherwise, and finding the truth shocked him into paranoia.

Comment by hairyfigment on Tsuyoku Naritai! (I Want To Become Stronger) · 2016-12-04T01:18:32.761Z · score: 0 (0 votes) · LW · GW

The Talmud, from what little I know, may be a poor example of this. In fact, last I checked, the Torah came from a combination of contradictory texts, and tradition comes close to admitting this with the story of Ezra.

I think most people in ancient times held all sorts of beliefs about the world which we would call "literalist" if someone held them today, but they rarely if ever believed in the total accuracy of one source. They believed gods made the world because that seemed like a good explanation at the time. They may have believed in the efficacy of sacrifice, because why wouldn't you want sacrifices made to you?

Comment by hairyfigment on Debating concepts - What is the comparative? · 2016-12-01T23:55:49.016Z · score: 0 (0 votes) · LW · GW

Not only are you ignoring the fact that the speaker conflated different claims or positions, you just did it yourself with this word "cosmopolitanism."

"The comparative" can avoid certain rhetorical tricks that are harmful to real discussion, but your example is a more pernicious trick.