Posts

Good arguments against "cultural appropriation" 2018-12-18T17:23:52.900Z · score: 23 (11 votes)
Moving Factward 2018-11-29T05:54:28.877Z · score: 15 (8 votes)
Sam Harris and the Is–Ought Gap 2018-11-16T01:04:08.185Z · score: 91 (45 votes)
Intrinsic properties and Eliezer's metaethics 2017-08-29T23:26:53.144Z · score: 13 (7 votes)
Globally better means locally worse 2017-03-22T19:25:42.474Z · score: 3 (4 votes)
Buckets and memetic immune disorders 2017-01-03T23:51:50.442Z · score: 12 (13 votes)
Why is the A-Theory of Time Attractive? 2014-10-31T23:11:24.608Z · score: 6 (19 votes)
Rationality Quotes October 2014 2014-10-01T23:02:20.410Z · score: 4 (5 votes)
Link: How Community Feedback Shapes User Behavior 2014-09-17T13:49:35.217Z · score: 4 (5 votes)
Rationality Quotes June 2014 2014-06-01T20:32:02.500Z · score: 9 (12 votes)
What Bayesianism taught me 2013-08-12T06:59:48.120Z · score: 70 (72 votes)
[SEQ RERUN] Your Strength as a Rationalist 2011-07-07T23:46:53.568Z · score: 5 (6 votes)
An explanation of Aumann's agreement theorem 2011-07-07T06:22:27.174Z · score: 6 (7 votes)
[SEQ RERUN] The Apocalypse Bet 2011-07-06T17:27:41.921Z · score: 3 (4 votes)
[SEQ RERUN] You Can Face Reality 2011-07-05T15:25:05.711Z · score: 6 (7 votes)
[SEQ RERUN] The Virtue of Narrowness 2011-07-03T19:47:09.286Z · score: 4 (5 votes)
[SEQ RERUN] The Proper Use of Doubt 2011-07-01T23:54:19.634Z · score: 5 (6 votes)
[SEQ RERUN] Focus Your Uncertainty 2011-06-30T05:21:41.099Z · score: 6 (7 votes)
[SEQ RERUN] The Importance of Saying "Oops" 2011-06-29T05:25:57.563Z · score: 6 (7 votes)
[SEQ RERUN] Religion's Claim to be Non-Disprovable 2011-06-28T02:54:09.504Z · score: 5 (6 votes)
[SEQ RERUN] Belief as Attire 2011-06-27T01:51:13.316Z · score: 5 (6 votes)
[SEQ RERUN] Professing and Cheering 2011-06-26T03:59:08.693Z · score: 4 (5 votes)
[SEQ RERUN] Bayesian Judo 2011-06-25T05:09:29.811Z · score: 7 (8 votes)
[SEQ RERUN] Making Beliefs Pay Rent (in Anticipated Experiences) 2011-06-20T00:34:27.940Z · score: 9 (10 votes)
[SEQ RERUN] Two More Things to Unlearn from School 2011-06-19T02:46:51.239Z · score: 3 (4 votes)
[SEQ RERUN] Are Your Enemies Innately Evil? 2011-06-17T14:46:06.254Z · score: 4 (5 votes)
[SEQ RERUN] Correspondence Bias 2011-06-15T21:26:10.537Z · score: 5 (6 votes)
[SEQ RERUN] Risk-Free Bonds Aren't 2011-06-14T16:14:21.637Z · score: 4 (5 votes)
[SEQ RERUN] One Life Against the World 2011-06-12T01:38:39.413Z · score: 6 (7 votes)
[SEQ RERUN] Scope Insensitivity 2011-06-11T01:08:46.996Z · score: 6 (7 votes)
[SEQ RERUN] Priors as Mathematical Objects 2011-05-29T15:36:07.552Z · score: 3 (4 votes)
[SEQ RERUN] Superstimuli and the Collapse of Western Civilization 2011-05-12T16:42:12.745Z · score: 7 (7 votes)
[SEQ RERUN] Blue or Green on Regulation? 2011-05-10T18:31:46.512Z · score: 7 (8 votes)
[SEQ RERUN] The Scales of Justice, the Notebook of Rationality 2011-05-08T17:43:27.892Z · score: 7 (8 votes)
[SEQ RERUN] Burch's Law 2011-05-06T21:15:16.085Z · score: 5 (6 votes)
[SEQ RERUN] Policy Debates Should Not Appear One-Sided 2011-05-03T18:13:57.618Z · score: 4 (5 votes)
[SEQ RERUN] You Are Not Hiring the Top 1% 2011-05-02T18:50:25.517Z · score: 3 (4 votes)
[SEQ RERUN] Just Lose Hope Already 2011-05-01T21:08:42.324Z · score: 6 (7 votes)
[SEQ RERUN] Politics is the Mind-Killer 2011-04-29T21:26:52.662Z · score: 5 (6 votes)
[SEQ RERUN] Outside the Laboratory 2011-04-29T01:04:38.551Z · score: 6 (7 votes)
UDT agents as deontologists 2010-06-10T05:01:06.970Z · score: 8 (21 votes)

Comments

Comment by tyrrell_mcallister on Coherent decisions imply consistent utilities · 2019-07-11T19:29:41.229Z · score: 2 (1 votes) · LW · GW

Typo: "And that's why the thingies you multiply probabilities by—the thingies that you use to weight uncertain outcomes in your imagination,"

Here, "probabilities" should be "utilities".

Comment by tyrrell_mcallister on Some Thoughts on My Psychiatry Practice · 2019-01-19T01:02:47.960Z · score: 8 (4 votes) · LW · GW

Trying the pill still makes you the kind of person who tries pills. Not trying really does avoid that.

Comment by tyrrell_mcallister on Good arguments against "cultural appropriation" · 2019-01-16T23:48:18.554Z · score: 2 (1 votes) · LW · GW

You may be interpreting "signalling" in a more specific way than I intended. You might be thinking of the kind of signalling that is largely restricted to status jockeying in zero-sum status games.

But I was using "signaling tool" in a very general sense. I just mean that you can use the signaling tool to convey information, and that you and your intended recipients have common knowledge about what your signal means. In that way, it's basically just a piece of language.

As with any piece of language, the fact that it signals something does place restrictions on what you can do.

For example, you can't yell "FIRE!" unless you are prepared to deal with certain consequences. But if the utterance "FIRE!" had no meaning, you would be freer, in a sense, to say it. If the mood struck you, you could burst out with a loud shout of "FIRE!" without causing a big commotion and making a bunch of people really angry at you.

But you would also lack a convenient tool that reliably brings help when you need it. This is a case where I think that the value of the signal heavily outweighs the restrictions that the signal's existence places on your actions.

Comment by tyrrell_mcallister on Good arguments against "cultural appropriation" · 2018-12-23T01:24:23.142Z · score: 2 (1 votes) · LW · GW

Good points.

Comment by tyrrell_mcallister on Good arguments against "cultural appropriation" · 2018-12-18T22:30:19.211Z · score: 10 (5 votes) · LW · GW

I'm having a hard time separating this from the 'offense' argument that you're not including.

I agree that part of offense is just "what it feels like on the inside to anticipate diminished status".

Analogously, part of the pain of getting hit by a hammer is just "what it feels like on the inside to get hit by a hammer."

However, in both cases, neither the pain nor the offense is just passive internal information about an objective external state of affairs. They include such information, but they are more than that. In particular, in both cases, they are also what it feels like to execute a program designed by evolution to change the situation.

Pain, for example, is an inducement to stop any additional hammer blows and to see to the wounds already inflicted. More generally, pain is part of an active program that is interacting with the world, planning responses, anticipating reactions to those responses, and so on. And likewise with offense.

The premise of my distinction between "offense" and "diminished status" is this. I maintain that we can conceptually separate the initial and unavoidable diminished status from the potential future diminished status.

The potential future diminished status depends on how the offendee responds. The emotion of offense is heavily wrapped up in this potential future and in what kinds of responses will influence that future. For that reason, offense necessarily involves the kinds of recursive issues that Katja explores.

In the end, these recursive issues will have to be considered. (They are real, so they should be reflected in our theory in the end.) But it seems like it should be possible to see what initial harm, if any, occurs before the recursion kicks in.

Comment by tyrrell_mcallister on Good arguments against "cultural appropriation" · 2018-12-18T21:59:00.231Z · score: 3 (2 votes) · LW · GW

In the examples that occur to me, both sides agree that mocking the culture in question would be bad. They just disagree about whether the person accused of CA is doing that.

Do you have in mind a case in which the accused party defended themselves by saying that the appropriated culture should be mocked?

That seems like a different kind of dispute that follows a different rhetorical script, on both sides. For example, critics of Islam will be accused of Islamophobia, not cultural appropriation. And people accused of CA are more likely to defend themselves by saying that they're honoring the culture. They will not embrace the claim that they are mocking it.

I'm not contesting the claim that mockery can be good in some cases. But that point isn't at the crux of the arguments over cultural appropriation that I've seen. Disputes where the goodness of mockery is at the crux will not be of the kind that I'm considering here.

Comment by tyrrell_mcallister on Moving Factward · 2018-11-29T14:28:26.110Z · score: 5 (3 votes) · LW · GW

I am not asserting that those aspects of "westward" apply to "factward".

Analogies typically assert a similarity between only some, not all, aspects of the two analogous situations. But maybe those aspects of "westward" are so salient that they interfere with the analogy.

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-29T07:00:56.386Z · score: 2 (1 votes) · LW · GW

suppose that I agree with Sam Harris that ~all humans find the same set of objective facts to be morally motivating. But then it turns out that we disagree on just which facts those are! How do we resolve this disagreement? We can hardly appeal to objective facts, to do so…

I don't follow. Sam would say (and I would agree) that which facts which humans find motivating (in the limit of ideal reflection, etc.) is an empirical question. With regard to each human, it is a scientific question about that human's motivational architecture.

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-29T05:49:00.502Z · score: 2 (1 votes) · LW · GW

It's true that a moral realist could always bridge the is–ought gap by the simple expedient of converting every statement of the form "I ought to X" to "Objectively and factually, X is what I ought to do".

But that is not enough for Sam's purposes. It's not enough for him that every moral claim is or is not the case. It's not enough that moral claims are matters of fact. He wants them to be matters of scientific fact.

On my reading, what he means by that is the following: When you are pursuing a moral inquiry, you are already a moral agent who finds certain objective and scientifically determinable facts to be motivating (inducing of pursuit or avoidance). You are, as Eliezer puts it, "created already in motion". Your inquiry, therefore, is properly restricted just to determining which scientific "is" statements are true and which are false. In that sense, moral inquiry reduces entirely to matters of scientific fact. This is the dialectical-argumentation point of view.

But his interlocutors misread him to be saying that every scientifically competent agent should find the same objective facts to be motivating. In other words, all such agents should feel compelled to act according to the same moral axioms. This is what "bridging the is–ought gap" would mean if you confined yourself to the logical-argumentation framework. But it's not what Sam is claiming to have shown.

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-19T06:02:40.342Z · score: 4 (2 votes) · LW · GW

If you're trying to convince me to do some thing X, then you must want me to do X, too. So we must be at least that aligned.

We don't have to be aligned in every regard. And you needn't yourself value every consequence of X that you hold up to me to entice me to X. But you do have to understand me well enough to know that I find that consequence enticing.

But that seems to me to be both plausible and enough to support the kind of dialectical moral argumentation that I'm talking about.

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-19T05:53:53.114Z · score: 19 (7 votes) · LW · GW

Thank you for the link to the transcript. Here are the parts that I read in that way (emphasis added):

[Sam:] So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we’re talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like making paperclips—these are not systems that will spontaneously decide that they could be doing more enlightened things than (say) making paperclips.
This moves to the question of how deeply unfamiliar artificial intelligence might be, because there are no natural goals that will arrive in these systems apart from the ones we put in there. And we have common-sense intuitions that make it very difficult for us to think about how strange an artificial intelligence could be. Even one that becomes more and more competent to meet its goals.

[...]

[Sam:] One thing this [paperclip-maximizer] thought experiment does: it also cuts against the assumption that [...] we’re not going to build something that is superhuman in competence that could be moving along some path that’s as incompatible with our wellbeing as turning every spare atom on Earth into a paperclip.

A bit later, Sam does deny that facts and values are "orthogonal" to each other, but he does so in the context of human minds ("we" ... "us") in particular:

Sam: So generally speaking, when we say that some set of concerns is orthogonal to another, it’s just that there’s no direct implication from one to the other. Some people think that facts and values are orthogonal to one another. So we can have all the facts there are to know, but that wouldn’t tell us what is good. What is good has to be pursued in some other domain. I don’t happen to agree with that, as you know, but that’s an example.
Eliezer: I don’t technically agree with it either. What I would say is that the facts are not motivating. “You can know all there is to know about what is good, and still make paperclips,” is the way I would phrase that.
Sam: I wasn’t connecting that example to the present conversation, but yeah.

So, Sam and Eliezer agree that humans and paperclip maximizers both learn what "good" means (to humans) from facts alone. They agree that humans are motivated by this category of "good" to pursue those things (world states or experiences or whatever) that are "good" in this sense. Furthermore, that a thing X is in this "good" category is an "is" statement. That is, there's a particular bundle of exclusively "is" statements that captures just the qualities of a thing that are necessary and sufficient for it to be "good" in the human sense of the word.

More to my point, Sam goes on to agree, furthermore, that a superintelligent paperclip maximizer will not be motivated by this notion of "good". It will be able to classify things correctly as "good" in the human sense. But no amount of additional scientific knowledge will induce it to be motivated by this knowledge to pursue good things.

Sam does later say that "There are places where intelligence does converge with other kinds of value-laden qualities of a mind":

[Sam:] I do think there’s certain goals and certain things that we may become smarter and smarter with respect to, like human wellbeing. These are places where intelligence does converge with other kinds of value-laden qualities of a mind, but generally speaking, they can be kept apart for a very long time. So if you’re just talking about an ability to turn matter into useful objects or extract energy from the environment to do the same, this can be pursued with the purpose of tiling the world with paperclips, or not. And it just seems like there’s no law of nature that would prevent an intelligent system from doing that.

Here I read him again to be saying that, in some contexts, such as in the case of humans and human-descendant minds, intelligence should converge on morality. However, no law of nature guarantees any such convergence for an arbitrary intelligent system, such as a paperclip maximizer.

This quote might make my point in the most direct way:

[Sam:] For instance, I think the is-ought distinction is ultimately specious, and this is something that I’ve argued about when I talk about morality and values and the connection to facts. But I can still grant that it is logically possible (and I would certainly imagine physically possible) to have a system that has a utility function that is sufficiently strange that scaling up its intelligence doesn’t get you values that we would recognize as good. It certainly doesn’t guarantee values that are compatible with our wellbeing. Whether “paperclip maximizer” is too specialized a case to motivate this conversation, there’s certainly something that we could fail to put into a superhuman AI that we really would want to put in so as to make it aligned with us.

A bit further on, Sam again describes how, in his view, "ought" evaporates into "is" statements under a consequentialist analysis. His argument is consistent with my "dialectical" reading. He also reiterates his agreement that sufficient intelligence alone isn't enough to guarantee convergence on morality:

[Sam:] This is my claim: anything that you can tell me is a moral principle that is a matter of oughts and shoulds and not otherwise susceptible to a consequentialist analysis, I feel I can translate that back into a consequentialist way of speaking about facts. These are just “is” questions, just what actually happens to all the relevant minds, without remainder, and I’ve yet to find an example of somebody giving me a real moral concern that wasn’t at bottom a matter of the actual or possible consequences on conscious creatures somewhere in our light cone.
Eliezer: But that’s the sort of thing that you are built to care about. It is a fact about the kind of mind you are that, presented with these answers to these “is” questions, it hooks up to your motor output, it can cause your fingers to move, your lips to move. And a paperclip maximizer is built so as to respond to “is” questions about paperclips, not about what is right and what is good and the greatest flourishing of sentient beings and so on.
Sam: Exactly. I can well imagine that such minds could exist ...

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-18T04:38:11.711Z · score: 4 (2 votes) · LW · GW

Sam Harris grants the claim that you find objectionable (see his podcast conversation with Yudkowsky). So it’s not the crux of the disagreement that this post is about.

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-16T22:14:19.390Z · score: 4 (3 votes) · LW · GW

It certainly shows that Eliezer understands the distinction that I'm highlighting.

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-16T19:57:05.926Z · score: 3 (2 votes) · LW · GW

Again you may object that this is circular reasoning and I am assuming ought right in the statement 1. But it would be like saying that I am assuming to have two apples. Sure, I am assuming that. And what is the problem exactly?

The difference from Sean Carroll's point of view (logical argumentation) is that not every scientifically competent agent will find this notion of "ought" compelling. (Really, only chess-playing programs would, if "ought" is taken in a terminal-value sense.) Whereas, such an agent's scientific competence would lead it to find compelling the axiom that you have two apples.

And I think that Sam Harris would agree with that, so far as it goes. But he would deny that this keeps him from reducing "ought" statements to purely scientific "is" statements, because he's taking the dialectical-argumentation point of view, not the logical-argumentation one. At any rate, Harris understands that a superintelligent AI might not be bothered by a universe consisting purely of extreme suffering. This was clear from his conversation with Eliezer Yudkowsky.

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-16T18:34:41.089Z · score: 5 (3 votes) · LW · GW

This post isn't arguing for any particular moral point of view over another, so you'll get no debate from me :).

Just to elaborate on the point of the post, though:

From the logical-argumentation point of view, something like the unpacking that you describe is necessary, because a moral argument has to conclude with an "ought" statement, in which "ought" appears explicitly, so the "ought" has to get introduced somewhere along the way, either in the original axioms or as a subsequent definition.

From the dialectical-argumentation point of view, this unpacking of "ought" is unnecessary, at least within the moral argument itself.

Granted, the persuader will need to know what kinds of "is" facts actually persuade you. So the persuader will have to know that "ought" means whatever it means to you. But the persuader won't use the word "ought" in the argument, except in some non-essential and eliminable way.

It's not like the persuader should have to say, "Do X, because doing X will bring about world W, and you assign high moral weight or utility to W."

Instead, the persuader will just say, "Doing X will bring about world W". That's purely an "is" statement. Your internal process of moral evaluation does the rest. But that process has to happen inside of you. It shouldn't—indeed, it can't—be carried out somehow within the statements of the argument itself.

Comment by tyrrell_mcallister on Sam Harris and the Is–Ought Gap · 2018-11-16T15:10:02.760Z · score: 3 (2 votes) · LW · GW

First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof?

I don't mean to distinguish it from logical proof in the everyday sense of that term. Rational persuasion can be as logically rigorous as the circumstances require. What I'm distinguishing "rational persuasion" from is a whole model of moral argumentation that I'm calling "logical argumentation" for the purposes of this post.

If you take the model of logical argumentation as your ideal, then you act as if a "perfect" moral argument could be embedded, from beginning to end, from axiomatic assumptions to "ought"-laden conclusions, as a formal proof in a formal logical system.

On the other hand, if you're working from a model of dialectical argumentation, then you act as if the natural endpoint is to persuade a rational agent to act. This doesn't mean that any one argument has to work for all agents. Harris, for example, is interested in making arguments only to agents who, in the limit of ideal reflection, acknowledge that a universe consisting exclusively of extreme suffering would be bad. However, you may think that you could still find arguments that would be persuasive (in the limit of ideal reflection) to nearly all humans.

Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty--a lower confidence level is acceptable? Or something else?

For the purposes of this post, I'm leaving much of this open. I'm just trying to describe how people are guided by various vague ideals about what ideal moral argumentation "should be".

But you're right that the word "rational" is doing some work here. Roughly, let's say that you're a rational agent if you act effectively to bring the world into states that you prefer. On this ideal, to decide how to act, you just need information about the world. Your own preferences do the work of using that information to evaluate plans of action. However, you aren't omniscient, so you benefit from hearing information from other people and even from having them draw out some of its implications for you. So you find value in participating in conversations about what to do. Nonetheless, you aren't affected by rhetorical fireworks, and you don't get overwhelmed by appeals to unreflective emotion (emotional impulses that you would come to regret on reflection). You're unaffected by the superficial features of who is telling you the information and how. You're just interested in how the world actually is and what you can do about it.

Do you need to have "deductive certainty" in the information that you use? Sometimes you do, but often you don't. You like it when you can get it, but you don't make a fetish of it. If you can see that it would be wasteful to spend more time on eking out a bit more certainty, then you won't do it.

"Rational persuasion" is the kind of persuasion that works on an agent like that. This is the rough idea.

Comment by tyrrell_mcallister on Against Modest Epistemology · 2017-11-15T19:54:20.037Z · score: 16 (7 votes) · LW · GW

Modest epistemology is slippery. You put forward an abstract formulation (Rule M), but "modestists" will probably not identify with it. Endorsing such an abstract view would conflict with modesty itself. Only a hedgehog would put any confidence in such a general principle, so divorced from any foxy particulars.

That's why any real-world modestist will advocate modesty only in particular contexts. That's why your friend was happy to say "Just no" about belief in God. God was not among the contexts where he thought that his being modest was warranted.

Consistent modestists don't advocate modesty "in general". They just think that, for certain people, including you and them, self-doubt is especially warranted when considering certain specific kinds of questions. Or they'll think that, for certain people, including you and them, trusting certain experts over one's own first-order reasoning is especially warranted. Now, you could ask them how their modesty could allow them to be so confident in their conclusion that modesty is warranted in just those cases. But they can consistently reply that, for people like them, that conclusion is not among the kinds of belief such that being modest is warranted.

The first several chapters of your book are very much on point, here. You're making the case that modesty is not warranted in certain cases — specific cases where your modest reader might have thought that it was (central bank policies and medical treatment). And you're providing powerful general methods for identifying such cases.

But this chapter, which argues against modesty in general, has to miss its mark. It might be persuasive to modest hedgehogs who have universalized their modesty. But modest hedgehogs are almost a contradiction in terms.

Comment by tyrrell_mcallister on Intrinsic properties and Eliezer's metaethics · 2017-08-30T22:27:54.190Z · score: 0 (0 votes) · LW · GW

I can think of two possibilities:

[1] that morality is based on rational thought as expressed through language

[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition

Closer to [2]. Does the analogy in Section 2 make sense to you? That would be my starting point for trying to explain further.

Comment by tyrrell_mcallister on Intrinsic properties and Eliezer's metaethics · 2017-08-30T18:24:40.362Z · score: 0 (0 votes) · LW · GW

∆-ness does not depend on the point of observation. If you like, just stipulate that you always view the configuration from a point outside the affine span of the configuration but on the line perpendicular to the affine span and passing through the configuration's barycenter. Then regular triangles, and only regular triangles, will project to regular triangles on your 2-dimensional display.
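
A quick numerical check of this construction (an illustrative sketch with arbitrarily chosen camera distance and focal length, not part of the original comment): perspective projection from such a point preserves the ratios of the triangle's side lengths, so a regular triangle projects to a regular triangle and an irregular one stays irregular.

```python
import numpy as np

def project_from_perpendicular(vertices, camera_distance=5.0, focal_length=1.0):
    """Pinhole-project triangle vertices from a camera placed on the line through
    the barycenter perpendicular to the triangle's plane, looking at the barycenter."""
    v = np.asarray(vertices, dtype=float)
    barycenter = v.mean(axis=0)
    normal = np.cross(v[1] - v[0], v[2] - v[0])
    normal /= np.linalg.norm(normal)
    camera = barycenter + camera_distance * normal
    # Orthonormal basis for the image plane (perpendicular to the viewing direction).
    e1 = (v[1] - v[0]) / np.linalg.norm(v[1] - v[0])
    e2 = np.cross(normal, e1)
    projected = []
    for p in v:
        d = p - camera
        depth = d.dot(-normal)  # distance along the viewing direction
        projected.append(focal_length * np.array([d.dot(e1), d.dot(e2)]) / depth)
    return np.array(projected)

def side_ratios(pts):
    a, b, c = pts
    sides = sorted([np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a)])
    return [s / sides[0] for s in sides]

# One regular and one irregular triangle, both in the z = 0 plane for simplicity
# (any rigid rotation of them gives the same result).
equilateral = [[1, 0, 0], [-0.5, np.sqrt(3) / 2, 0], [-0.5, -np.sqrt(3) / 2, 0]]
scalene = [[0, 0, 0], [3, 0, 0], [1, 2, 0]]

for name, tri in [("equilateral", equilateral), ("scalene", scalene)]:
    print(name,
          "ratios before:", np.round(side_ratios(np.array(tri)), 3),
          "after:", np.round(side_ratios(project_from_perpendicular(tri)), 3))
```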

Comment by tyrrell_mcallister on Thought experiment: coarse-grained VR utopia · 2017-06-15T18:17:08.519Z · score: 1 (1 votes) · LW · GW

An interesting thought-experiment. But I don't follow this part:

So in theory we could hand it off to human philosophers or some other human-based procedure, thus dealing with "complexity of value" without much risk.

The complexity of value has to do with how the border delineating good outcomes from all possible outcomes cannot be specified in a compact way. Granted, the space of possible polygon arrangements is smaller than the space of possible atom arrangements. That does make the space of possible outcomes relatively more manageable in your VR world. But the space of outcomes is still Vast. It seems Vast enough that the border separating good from bad is still complex beyond our capacity to specify.

Comment by tyrrell_mcallister on The Rationalistsphere and the Less Wrong wiki · 2017-06-15T16:30:53.232Z · score: 4 (4 votes) · LW · GW

There was a historical attempt to summerise all major Less Wrong posts, an interesting but incomplete project. It was also approach without a usefully normalised approach. Ideally, every article would have its own page which could be heavily tagged up with metadata such a themes, importance, length, quality, author and such. Is this the goal of the wiki?

I wrote a dozen or two of those summaries. My goal was to write a highly distilled version of the post itself.

I aimed for summaries that were about four or five sentences long. Very roughly speaking, I tried to have a sentence for each principal thesis, and a sentence for each supporting argument. As a self-imposed constraint, I kept my summaries under 70 words.

For me, the summary should capture just the logical structure supporting the final take-away of the post, while losing all the anecdotes, illustrative examples, tribal signals, pseudo-dialectical back-and-forth, and discursive meanderings in the original.

Comment by tyrrell_mcallister on Globally better means locally worse · 2017-03-22T23:24:43.543Z · score: 1 (1 votes) · LW · GW

Even when cars were new they couldn't be overbuilt the way buildings were in prehistory because they still had to be able to move themselves around.

Which is interesting corroboration in light of CronoDAS's comment that cars have been getting more durable, not less.

Comment by tyrrell_mcallister on Globally better means locally worse · 2017-03-22T23:18:22.552Z · score: 1 (1 votes) · LW · GW

And cars are significantly more durable

That is an important counter-weight to the claims in the article I linked to.

ETA: Though maybe it's actually consistent in light of dogiv's observation that there were always limits on how much you could overbuild cars.

Comment by tyrrell_mcallister on Globally better means locally worse · 2017-03-22T23:16:15.028Z · score: 2 (2 votes) · LW · GW

Planned obsolescence alone doesn't explain the change over time of this phenomenon. It's a static explanation, one which applies equally well to every era, unless something more is said. So the question becomes, Why are manufacturers planning for sooner obsolescence now than they did in the past?

Likewise, "worse materials cost less" is always true. It's a static fact, so it can't explain the observed dynamic phenomenon by itself. Or, at least, you need to add some additional data, like, "materials are available now that are worse than what used to be available". That might explain something. It would be another example of things being globally better in a perverse sense (more options = better).

Comment by tyrrell_mcallister on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-21T22:23:56.500Z · score: 3 (4 votes) · LW · GW

Totally unrealistic.

Thorin was never in a position to hire mithril miners. He had sufficient capital for only a very brief time before dying at the Battle of Five Armies.

Comment by tyrrell_mcallister on Buckets and memetic immune disorders · 2017-01-04T16:31:22.865Z · score: 3 (3 votes) · LW · GW

You are emphasizing the truth-values at the nodes of the belief network ("check back to Q and P"). That is important. After all, in the end, you do want to have the right truth-values in the buckets.

But there are also structural questions about the underlying graph. Which edges should connect the nodes and, perhaps more deeply, which nodes should be there in the first place? When should new nodes be created? These are the questions addressed by Phil's and Anna's posts.

Comment by tyrrell_mcallister on The challenge of writing Utopia · 2017-01-04T00:32:31.382Z · score: 1 (1 votes) · LW · GW

An enticing advert.

Possible typo: "life it out".

Comment by tyrrell_mcallister on Rationality Quotes January - March 2017 · 2017-01-04T00:23:11.519Z · score: 3 (3 votes) · LW · GW

Of course, when calling people idiots for not agreeing with material that is called crackpot, you had better be careful, because if you are not right about the material, if it is crackpot, you are gone for good.

But you aren't "gone for good". You will have your own tribe of believers who will still support you. Before they had been called "fuckwits" they might have deserted you when the evidence didn't go your way. But they're not going to desert you now, not when doing so would be tantamount to admitting that they were fuckwits all along.

Comment by tyrrell_mcallister on Rationality Quotes January - March 2017 · 2017-01-04T00:09:57.288Z · score: 0 (0 votes) · LW · GW

If someone who wanted to learn to dance were to say: For centuries, one generation after the other has learned the positions, and it is high time that I take advantage of this and promptly begin with the quadrille—people would presumably laugh a little at him, but in the world of spirit [i.e., development of one's soul] this is [thought to be] very plausible. What, then, is education? I believe it is the course the individual goes through in order to catch up with himself, and the person who will not go through this course is not much helped by being born in the most enlightened age.

Søren Kierkegaard, Fear and Trembling, III 96 (trans. H. V. Hong and E. H. Hong). Annotations are mine.

Comment by tyrrell_mcallister on Sample means, how do they work? · 2016-12-07T18:30:47.279Z · score: 0 (0 votes) · LW · GW

[A]ctually a 95% confidence interval is an interval generated by a process, where the process has a 95% chance of generating a confidence interval that contains the true mean.

Is it incorrect for a Bayesian to gloss this as follows?

Given (only) that this CI was generated by process X with input 0.95, this CI has a 95% chance of containing the true mean.

I could imagine a frequentist being uncomfortable with talk of the "chance" that the true mean (a certain fixed number) is between two other fixed numbers. "The true mean either is or is not in the CI. There's no chance about it." But is there a deeper reason why a Bayesian would also object to that formulation?
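
(For reference, the quoted process-based reading is easy to check by simulation. Below is a rough sketch assuming the process is the usual normal-theory interval for a mean with known standard deviation; the particular mean, sigma, and sample size are arbitrary choices.)

```python
import random
import statistics

TRUE_MEAN, SIGMA, N = 10.0, 2.0, 30   # arbitrary "nature" and sample size
Z_95 = 1.96                           # two-sided 95% normal critical value

def generate_ci():
    """One run of 'process X with input 0.95': sample data, return a 95% CI for the mean."""
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    half_width = Z_95 * SIGMA / N ** 0.5
    m = statistics.mean(sample)
    return (m - half_width, m + half_width)

trials = 20_000
covered = sum(lo <= TRUE_MEAN <= hi for lo, hi in (generate_ci() for _ in range(trials)))
print(f"fraction of intervals containing the true mean: {covered / trials:.3f}")  # ~0.95
```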

Comment by tyrrell_mcallister on The cup-holder paradox · 2016-07-01T18:40:16.860Z · score: 0 (0 votes) · LW · GW

Smart phones are primarily pocket-sized PCs. Many of their most-attractive features could be developed only with strong expertise in computer and computer-interface design. Apple was world-class in these areas. Granted, the additional feature of being a phone was outside of Apple's wheelhouse. Nonetheless, Apple could contribute strong expertise to all but one of the features in the sum

(features of a pocket-sized PC) + (the feature of being a phone).

Somehow, this one remaining feature (phoning) got built into the name "smart phone". But the success of the iPhone is due to how well the other features were implemented. It turned out that being a phone could be done sufficiently well without expertise in building phones, given strong expertise in building pocket-sized PCs.

In general terms, Apple identified an X (phones) that could be improved by adding Y (features of PCs). They set themselves to making X+Y. Crucially, Y was something in which Apple already had tremendous expertise. True, the PC features would have to be constrained by the requirement of being a phone. (Otherwise, you get this.) But the hardest part of that is miniaturization, and Apple already had expertise in this, too. So, Apple had expertise in Y and in a major part of combining X and Y.

In other words, this was not a case of a non-expert beating experts at their own game. It was a case of a Y-expert beating the X-experts (or Xperts, if you will) at making X+Y.

On the other hand, PhilGoetz identified an X (cars) that could be improved by adding Y (good cup-holders). In contrast to Apple's case, Phil displays no expertise in Y at all. In particular, he displays no expertise at the hardest part of combining X and Y, which is getting the cup-holder to fit in the car without getting in the way of anything else more important.

If Phil turned out to be right, it really would be a case of a non-expert beating the experts. So it would be much more surprising than Apple's beating Nokia.

Comment by tyrrell_mcallister on The Sally-Anne fallacy · 2016-04-12T17:12:26.558Z · score: 15 (15 votes) · LW · GW

A special case of this fallacy that you often see is

Your Axioms (+ My Axioms) yield a bald contradiction. Therefore, your position isn't even coherent!

This is a special case of the fallacy because the charge of self-contradiction could stick only if the accused person really subscribed to both Your Axioms and My Axioms. But this is only plausible because of an implicit argument: "My Axioms are true, so obviously the accused believes them. The accused just hasn't noticed the blatant contradiction that results."

Comment by tyrrell_mcallister on Consider having sparse insides · 2016-04-03T15:00:41.601Z · score: 3 (3 votes) · LW · GW

I think that this problem is fixed by reducing your identity even further:

"I am a person who aims to find the right and good way for me to be, and my goal is to figure out how to make myself that way."

This might seem tautological and vacuous. But living up to it means actually forming hypotheses about what the good way to be is, and then testing those hypotheses. I'm confident that "being effective" is part of the good way to be. But, as you point out, effectiveness alone surely isn't enough. Effectively doing good things, not bad things, makes all the difference.

At any rate, effectiveness itself is only a corollary of the ultimate goal, which is to be good. As a mere corollary, effectiveness does not endanger my recognition of other aspects of being good, such as keeping promises and maintaining a certain kind of loyalty to my local group.

The upshot, in my view, is that AnnaSalamon's approach ultimately converges on virtue ethics.

Comment by tyrrell_mcallister on Rationality Quotes Thread January 2016 · 2016-01-26T17:16:12.410Z · score: 1 (1 votes) · LW · GW

Why is this being downvoted (apart from misspelling the name)? I take the quote to be a version of "If it's stupid and works, it's not stupid."

Comment by tyrrell_mcallister on Rationality Quotes Thread January 2016 · 2016-01-03T23:13:02.976Z · score: 4 (6 votes) · LW · GW

Experience has shown that it is by no means difficult for philosophy to begin. Far from it. It begins with nothing, and consequently can always begin. But the difficulty, both for philosophy and for philosophers, is to stop.

Søren Kierkegaard, Either/Or, vol. 1 (trans. Swenson & Swenson).

Comment by tyrrell_mcallister on Is semiotics bullshit? · 2015-09-03T22:20:59.167Z · score: 0 (0 votes) · LW · GW

Does the disagreement, whatever it is, have any more impact on anything outside itself than semiotics does?

I can't say how it compares to semiotics because I don't know that field or its history.

If you're just asking whether foundations-of-math questions have had any impact outside of themselves, then the answer is definitely Yes.

For example, arguments about the foundations of mathematics led to developments in logic and automated theorem proving. Gödel worked out his incompleteness theorems within the context of Russell and Whitehead's Principia Mathematica. One of the main purposes of PM was to defend the logicist thesis that mathematical claims are just logical tautologies concerning purely logical concepts. Also, PM is the first major contribution that I know of to the study of Type Theory, which in turn is central in automated theorem proving.

Also, if you're trying to assess whether you believe in the Tegmark IV multiverse, which says that everything is math, then what you think math is is probably going to play some part in that assessment. Maybe that is just a case of one pragmatically-pointless question's bearing on another, but there it is.

Comment by tyrrell_mcallister on Is semiotics bullshit? · 2015-09-03T18:57:29.231Z · score: 0 (0 votes) · LW · GW

If it meant something, semioticians could take actual sentences, and then show how the two opposing views provide different interpretations of those sentences

Is that fair?

Everyone agrees that 2+2=4, but people disagree about what that statement is about. Within the foundations of mathematics, logicists and formalists can have a substantive disagreement even while agreeing on the truth-value of every particular mathematical statement.

Analogously, couldn't semioticians agree about the interpretation of every text, but disagree about the nature of the relationship between the text and its correct interpretation? Granted that X is the correct interpretation of Y, what exactly is it about X and Y that makes this the case? Or is there some third thing Z that makes X the correct interpretation of Y? Or is Z not a thing in its own right, but rather a relation among things? And, if so, what is the nature of that relation? Aren't those the kinds of questions that semioticians disagree about?

Comment by tyrrell_mcallister on What Bayesianism taught me · 2015-08-24T14:33:50.162Z · score: 0 (0 votes) · LW · GW

No, I don't think so. But I'm not sure how to elaborate without knowing why you thought that.

Comment by tyrrell_mcallister on An overview of the mental model theory · 2015-08-19T03:16:40.901Z · score: 1 (1 votes) · LW · GW

Last I checked, your edits haven't changed which answer is correct in your scenario. As you've explained, the Ace is impossible given your set-up.

(By the way, I thought that the earliest version of your wording was perfectly adequate, provided that the reader was accustomed to puzzles given in a "propositional" form. Otherwise, I expect, the reader will naturally assume something like the "algorithmic" scenario that I've been describing.)

In my scenario, the information given is not about which propositions are true about the outcome, but rather about which algorithms are controlling the outcome.

To highlight the difference, let me flesh out my story.

Let K be the set of card-hands that contain at least one King, let A be the set of card-hands that contain at least one Ace, and let Q be the set of card-hands that contain at least one Queen.

I'm programming the card-dealing robot. I've prepared two different algorithms, either of which could be used by the robot:

  • Algorithm 1: Choose a hand uniformly at random from K ∪ A, and then deal that hand.

  • Algorithm 2: Choose a hand uniformly at random from Q ∪ A, and then deal that hand.

These are two different algorithms. If the robot is programmed with one of them, it cannot be programmed with the other. That is, the algorithms are mutually exclusive. Moreover, I am going to use one or the other of them. These two algorithms exhaust all of the possibilities.

In other words, of the two algorithm-descriptions above, exactly one of them will truthfully describe the robot's actual algorithm.

I flip a coin to determine which algorithm will control the robot. After the coin flip, I program the robot accordingly, supply it with cards, and bring you to the table with the robot.

You know all of the above.

Now the robot deals you a hand, face down. Based on what you know, which is more probable: that the hand contains a King, or that the hand contains an Ace?
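
For concreteness, here is a quick simulation of the setup (a sketch that assumes 5-card hands dealt from a standard 52-card deck, since the story doesn't fix a hand size; rejection sampling stands in for "choose a hand uniformly at random"). Averaged over the coin flip, an Ace comes out noticeably more probable than a King.

```python
import random

RANKS = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]
DECK = RANKS * 4  # suits don't matter for this question

def deal(condition, hand_size=5):
    """Deal a uniformly random hand satisfying `condition` (rejection sampling)."""
    while True:
        hand = random.sample(DECK, hand_size)
        if condition(hand):
            return hand

def run(trials=100_000):
    kings = aces = 0
    for _ in range(trials):
        if random.random() < 0.5:
            # Algorithm 1: the hand must contain a King or an Ace (or both)
            hand = deal(lambda h: "K" in h or "A" in h)
        else:
            # Algorithm 2: the hand must contain a Queen or an Ace (or both)
            hand = deal(lambda h: "Q" in h or "A" in h)
        kings += "K" in hand
        aces += "A" in hand
    # Under these assumptions this prints roughly 0.44 for King and 0.59 for Ace.
    print(f"P(hand contains a King) ~ {kings / trials:.3f}")
    print(f"P(hand contains an Ace) ~ {aces / trials:.3f}")

run()
```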

Comment by tyrrell_mcallister on An overview of the mental model theory · 2015-08-18T23:49:49.159Z · score: 0 (0 votes) · LW · GW

Ace is not more probable.

Ace is more probable in the scenario that I described.

Of course, as you say, Ace is impossible in the scenario that you described (under its intended reading). The scenario that I described is a different one, one in which Ace is most probable. Nonetheless, I expect that someone not trained to do otherwise would likely misinterpret your original scenario as equivalent to mine. Thus, their wrong answer would, in that sense, be the right answer to the wrong question.

Comment by tyrrell_mcallister on An overview of the mental model theory · 2015-08-17T16:42:47.341Z · score: 6 (8 votes) · LW · GW

I'd guess that getting this question "correct" almost requires having been trained to parse the problem in a certain formal way — namely, purely in terms of propositional logic.

Otherwise, a perfectly reasonable parsing of the problem would be equivalent to the following:

Before you stands a card-dealing robot, which has just been programmed to deal a hand of cards. Exactly one of the following statements is true of the robot's hand-dealing algorithm:

  • The algorithm chooses from among only those hands that contain either a king or an ace (or both).
  • The algorithm chooses from among only those hands that contain either a queen or an ace (or both).

The robot now deals a hand. Which is more probable: the hand contains a King or the hand contains an Ace?

On this reading, Ace is most probable.

Indeed, this "algorithmic" reading seems like the more natural one if you're used to trying to model the world as running according to some algorithm — that is, if, for you, "learning about the world" means "learning more about the algorithm that runs the world".

The propositional-logic reading (the one endorsed by the OP) might be more natural if, for you, "learning about the world" means "learning more about the complicated conjunction-and-disjunction of propositions that precisely carves out the actual world from among the possible worlds."

Comment by tyrrell_mcallister on Book Review: Naive Set Theory (MIRI research guide) · 2015-08-15T15:24:01.277Z · score: 4 (4 votes) · LW · GW

There is a convention according to which a one-to-one function is injective, while a one-to-one correspondence is an injective function that is also surjective, i.e., a bijection. (I don't know whether Halmos uses this convention.)

Comment by tyrrell_mcallister on Book Review: Naive Set Theory (MIRI research guide) · 2015-08-15T15:15:45.598Z · score: 4 (4 votes) · LW · GW

In general, reading about the same subject from a different author is a great way to learn and retain the material better. This is true even if neither author is objectively "better" than the other. Something about recognizing the same underlying concept expressed in different words helps to fix that concept in the mind.

It's possible to exploit this phenomenon even when you have only one text to work with. One trick I use when working through a math text is to willfully use different notation in my notes next to the text. Using a different notation forces me to make sure that I'm really following the details of the argument. Expressing the same logic in different symbols makes it easier to see through those symbols to the underlying logic.

Comment by tyrrell_mcallister on Truth and the Liar Paradox · 2015-02-22T23:21:17.130Z · score: 1 (1 votes) · LW · GW

That's not really a problem with Prior's resolution. Rather, it's a different problem with self-reference, which appears whether we adopt Prior's resolution or not.

Compare: "P" and "P and P" are usually equivalent. But

"This sentence has five words." and "This sentence has five words and this sentence has five words."

don't have the same truth value. The problem seems to be that the meaning of "this sentence" isn't the same in the two ostensibly equivalent sentences. Whatever your favorite solution of this problem is, it seems that Prior could just graft that solution onto his own.

Prior's solution to the liar paradox needn't solve all paradoxes of self-reference. As long as his solution is compatible with other solutions to other paradoxes, Prior has still contributed something of value.

Comment by tyrrell_mcallister on Entropy and Temperature · 2014-12-17T18:51:14.463Z · score: 8 (8 votes) · LW · GW

This is a good article making a valuable point. But this —

Temperature is sometimes taught as, "a measure of the average kinetic energy of the particles," because for an ideal gas U/N = (3/2) k_B T. This is wrong, for the same reason that the ideal gas entropy isn't the definition of entropy.

— is a confusing way to speak. There is such a thing as "the average kinetic energy of the particles", and one measure of this thing is called "temperature" in some contexts. There is nothing wrong with this as long as you are clear about what context you are in.

If you fall into the sun, your atoms will be strewn far and wide, and it won't be because of something "in the mind". There is a long and perfectly valid convention of calling the relevant feature of the sun its "temperature".
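
For reference, the two usages connect directly in the ideal-gas case: applying the statistical definition of temperature to the Sackur–Tetrode entropy of a monatomic ideal gas recovers exactly the quoted relation, so the kinetic-energy reading is a special case of the general definition rather than a rival to it.

$$
\frac{1}{T} \;\equiv\; \left(\frac{\partial S}{\partial U}\right)_{V,N},
\qquad
S_{\text{ideal gas}} = N k_B \left[\ln\!\left(\frac{V}{N}\left(\frac{4\pi m U}{3 N h^2}\right)^{3/2}\right) + \frac{5}{2}\right]
\;\Longrightarrow\;
\frac{1}{T} = \frac{3 N k_B}{2U}
\;\Longrightarrow\;
\frac{U}{N} = \frac{3}{2}\, k_B T .
$$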

Comment by tyrrell_mcallister on Is arguing worth it? If so, when and when not? Also, how do I become less arrogant? · 2014-12-05T23:13:13.539Z · score: 0 (0 votes) · LW · GW

Yes.

As I added in my reply to him, his reply did help me with other parts of his argument. But I needed more iterations of questions and clarifications before I could understand that particular phrase better.

This doesn't seem to me like wasted effort, though, because I expect that what he did clarify would have helped me to understand that particular phrase, had we continued to discuss it. So, while I can't explain that particular phrase better than I could before, I expect that I am closer to understanding it. Certainly, partial illumination of the argument surrounding a specific sentence is normally the preamble to full illumination of that specific sentence, if this full illumination ever happens.

Comment by tyrrell_mcallister on Is arguing worth it? If so, when and when not? Also, how do I become less arrogant? · 2014-11-30T15:22:01.959Z · score: 1 (1 votes) · LW · GW

When I find someone else's argument puzzling, it is often for a reason that they didn't anticipate. Because they didn't anticipate that I would find a particular step puzzling in a particular way, they didn't explain this step, at least not in a way that I understood.

Thus, I need them to (1) be willing to do the work of understanding which step I found puzzling and why, and (2) be willing to do the work of addressing my idiosyncratic confusion. (They will perceive my confusion as idiosyncratic, because this is the first time that they are encountering it.*)

Both of those steps require some work on their part. Moreover, they need to do this work to bridge a step that seemed obvious to them, and hence which seemed like it could be missed only by someone who is, in a certain sense, unusually stupid. This automatically puts me under suspicion of being "not worth the time", either because I'm stupid or because I'm asking in bad faith. (See Expecting Short Inferential Distances.)

So, most people aren't willing to undertake this work unless they have some sympathy for me. The other lines of Rapaport's advice serve to build this sympathy, so they should happen before I attempt the "re-express clearly and vividly" stage.

When I do attempt a "re-expression" as part of my process of understanding their argument, my first attempt is accompanied by something like "Here is my attempt to restate what you are saying, but I know that it is probably wrong. This attempt is just to give you something to work with as you address the error in my understanding of your meaning." (Here's an example of my doing this.)

This may seem overly humble or deferential, but, in my experience, it is effective and literally true. This kind of expression really does make people more willing to attempt a helpful reply, and their replies really do fill in gaps in my understanding of their position. (Again, see the above example. I didn't entirely resolve my confusion, but I did come away understanding my interlocutor's position better.)


* However, if I continue to profess confusion over this step, and I haven't made myself sympathetic by following the rest of Rapaport's advice, then my professions won't be chalked up to idiosyncratic confusion, but rather to willful stupidity or bad faith.

Comment by tyrrell_mcallister on Is arguing worth it? If so, when and when not? Also, how do I become less arrogant? · 2014-11-29T23:03:35.084Z · score: 0 (0 votes) · LW · GW

Notice that "re-express your target’s position clearly" was not the entirety of Rapaport's advice, or even of that line of his advice.

Comment by tyrrell_mcallister on Belief in Self-Deception · 2014-11-17T00:42:52.949Z · score: 0 (0 votes) · LW · GW

I think that there's a better chance that he'll see your comment if you reply directly to the post rather than to another comment. At least, I think that that's how it works.

Comment by tyrrell_mcallister on Why is the A-Theory of Time Attractive? · 2014-11-04T22:12:26.597Z · score: 1 (1 votes) · LW · GW

But that's A-theory, not presentism, which is being explained, right? This paper claims there's a distinction

Yes. One can certainly be an A-theorist without being a presentist. Some people really have subscribed to so-called "moving spotlight" theories. (Hermann Weyl was an example.)

I'm less convinced that anyone was ever a presentist but not an A-theorist. The paper you cite doesn't convince me for at least the following reasons.

First, the paper doesn't even argue that any non-A-theorist presentists have ever actually existed. Rather, the paper attempts to show that such a theory is, as it were, technically possible.

Second, I don't buy that the paper succeeds even at this. The author constructs the theory in Section 4. But the construction essentially depends on a loophole: A-theories must posit A-properties, he says, but existence is not a property. Then, in Section 5.3, he deals with what seems to me to be the obvious reply. He allows that maybe A-theories only require A-facts, and not necessarily A-properties. If existence is a fact, then his construction fails. His reply is that "it is still possible to be a presentist without being an A-theorist: we need simply deny the existence of facts. ... If there are no facts at all then there are no existence facts. ... This is not an unreasonable view. There are metaphysical systems that do not posit facts—versions of substance theory, bundle theory, and so on."

I find this unconvincing. I don't know enough about these other theories to know how they get by without facts. But I suspect that they introduce some kind of things, call them faks, that do the work of facts. I suspect that the A-theory could just as well be held to require only that there are A-faks.