Comments
I just saw this post today. I was a little worried that I'd somehow subconsciously stolen this concept and its name from you until I saw your link to my comment. At any rate you definitely described it more memorably than I did.
Giving up this new technology would be analogous to living like a quaker today
Perhaps you meant "Amish" or "Mennonite" rather than "quaker"?
Nice article all around!
Another error that conspiracy-theorists make is to "take the org chart literally".
CTists attribute superhuman powers to the CIA, etc., because they suppose that decision-making in these organizations runs exactly as shown on the chart. Each box, they suppose, takes in direction from above and distributes it below just as infallibly as the lines connecting the boxes are drawn on the chart.
If you read org charts literally, it looks like leaders at the top have complete control over everything that their underlings do. So of course the leader can just order the underlings not to defect or leak or baulk at tasks that seem beyond the pale!
This overly literal reading of the org chart obscures the fact that all these people are self-interested agents, perhaps with only a nominal loyalty to the structure depicted on the chart. But many CTists miss this, because they read the org chart as if it were a flowchart documenting the dependencies among subroutines in a computer program.
LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.
LW should not be comparing itself to Plato. It's trying to do something different. The best of what Plato did is, for the most part, orthogonal to what LW does.
You can take the LW worldview totally onboard and still learn a lot from Plato that will not in any way conflict with that worldview.
Or you may find Plato totally useless. But it won't be your adoption of the LW memeplex alone that determines which way you go.
Also, your empathy reassures them that you will be ready with truly helpful help if they do later want it.
I agree that a rich person won't tolerate disposable products where more durable versions are available. Durability is a desirable thing, and people who can afford it will pay for it when it's an option.
But imagine a world where washing machines cost as much as they do in our world, but all washing machines inevitably break down after a couple years. Durable machines just aren't available.
Then, in that world, you have to be wealthier to maintain your washing-machine-owning status. People who couldn't afford to repurchase a machine every couple of years would learn to do without. But people who could afford it would consider it an acceptable cost of living in the style to which they have become accustomed.
Did you really need to say that you'd be brief? Wasn't it enough to say that you'd omit needless words? :)
It seems unlikely that joining a specific elite is terminally valuable as such, except to ephemeral subagents that were built for instrumental reasons to pursue it.
It seems quite likely that people seek to join whatever elite they can as a means to some more fundamental ends. Those of us who aren't driven to join the elite are probably satisfying our hunger to pursue those more fundamental ends in other ways.
For example, people might seek elite status in part to win security against bad fortune or against powerful enemies. But it might seem to you that there are other ways to be more secure against these things. It might even seem that being elite would leave you more exposed to such dangers.
For example, if you think that the main danger is unaligned AI, then you won't think of elite status as a safe haven, so you'll be less motivated to seek it. You'll find that sense of security in doing something else that seems to address that danger better.
I've played a lot of role-playing games back in my day and often people write all kinds of things as flavour text. And none of it is meant to be taken literally.
This line gave me an important insight into how you were thinking.
The creators were thinking of it as a community trust-building exercise. But you thought that it was intended to be a role-playing game. So, for you, "cooperate" meant "make the game interesting and entertaining for everyone." That paints the risk of taking the site down in a very different light.
And if there was a particular goal, instead of our being supposed to decide for ourselves what the goal was, then maybe it would have made sense to state it clearly?
But the "role-playing game" glasses that you were wearing would have (understandably) made such a statement look like "flavor text".
I wrote a LessWrong post that addressed this: What Bayesianism Taught Me
Typo: "And that's why the thingies you multiply probabilities by—the thingies that you use to weight uncertain outcomes in your imagination,"
Here, "probabilities" should be "utilities".
Trying the pill still makes you the kind of person who tries pills. Not trying really does avoid that.
You may be interpreting "signaling" in a more specific way than I intended. You might be thinking of the kind of signaling that is largely restricted to status jockeying in zero-sum status games.
But I was using "signaling tool" in a very general sense. I just mean that you can use the signaling tool to convey information, and that you and your intended recipients have common knowledge about what your signal means. In that way, it's basically just a piece of language.
As with any piece of language, the fact that it signals something does place restrictions on what you can do.
For example, you can't yell "FIRE!" unless you are prepared to deal with certain consequences. But if the utterance "FIRE!" had no meaning, you would be freer, in a sense, to say it. If the mood struck you, you could burst out with a loud shout of "FIRE!" without causing a big commotion and making a bunch of people really angry at you.
But you would also lack a convenient tool that reliably brings help when you need it. This is a case where I think that the value of the signal heavily outweighs the restrictions that the signal's existence places on your actions.
Good points.
I'm having a hard time separating this from the 'offense' argument that you're not including.
I agree that part of offense is just "what it feels like on the inside to anticipate diminished status".
Analogously, part of the pain of getting hit by a hammer is just "what it feels like on the inside to get hit by a hammer."
However, in both cases, neither the pain nor the offense is just passive internal information about an objective external state of affairs. They include such information, but they are more than that. In particular, in both cases, they are also what it feels like to execute a program designed by evolution to change the situation.
Pain, for example, is an inducement to stop any additional hammer blows and to see to the wounds already inflicted. More generally, pain is part of an active program that is interacting with the world, planning responses, anticipating reactions to those responses, and so on. And likewise with offense.
The premise of my distinction between "offense" and "diminished status" is this. I maintain that we can conceptually separate the initial and unavoidable diminished status from the potential future diminished status.
The potential future diminished status depends on how the offendee responds. The emotion of offense is heavily wrapped up in this potential future and in what kinds of responses will influence that future. For that reason, offense necessarily involves the kinds of recursive issues that Katja explores.
In the end, these recursive issues will have to be considered. (They are real, so they should be reflected in our theory in the end.) But it seems like it should be possible to see what initial harm, if any, occurs before the recursion kicks in.
In the examples that occur to me, both sides agree that mocking the culture in question would be bad. They just disagree about whether the person accused of CA is doing that.
Do you have in mind a case in which the accused party defended themselves by saying that the appropriated culture should be mocked?
That seems like a different kind of dispute that follows a different rhetorical script, on both sides. For example, critics of Islam will be accused of Islamophobia, not cultural appropriation. And people accused of CA are more likely to defend themselves by saying that they're honoring the culture. They will not embrace the claim that they are mocking it.
I'm not contesting the claim that mockery can be good in some cases. But that point isn't at the crux of the arguments over cultural appropriation that I've seen. Disputes where the goodness of mockery is at the crux will not be of the kind that I'm considering here.
I am not asserting that those aspects of "westward" apply to "factward".
Analogies typically assert a similarity between only some, not all, aspects of the two analogous situations. But maybe those aspects of "westward" are so salient that they interfere with the analogy.
suppose that I agree with Sam Harris that ~all humans find the same set of objective facts to be morally motivating. But then it turns out that we disagree on just which facts those are! How do we resolve this disagreement? We can hardly appeal to objective facts, to do so…
I don't follow. Sam would say (and I would agree) that which facts which humans find motivating (in the limit of ideal reflection, etc.) is an empirical question. With regard to each human, it is a scientific question about that human's motivational architecture.
It's true that a moral realist could always bridge the is–ought gap by the simple expedient of converting every statement of the form "I ought to X" to "Objectively and factually, X is what I ought to do".
But that is not enough for Sam's purposes. It's not enough for him that every moral claim is or is not the case. It's not enough that moral claims are matters of fact. He wants them to be matters of scientific fact.
On my reading, what he means by that is the following: When you are pursuing a moral inquiry, you are already a moral agent who finds certain objective and scientifically determinable facts to be motivating (inducing of pursuit or avoidance). You are, as Eliezer puts it, "created already in motion". Your inquiry, therefore, is properly restricted just to determining which scientific "is" statements are true and which are false. In that sense, moral inquiry reduces entirely to matters of scientific fact. This is the dialectical-argumentation point of view.
But his interlocutors misread him to be saying that every scientifically competent agent should find the same objective facts to be motivating. In other words, all such agents should [edit: I should have said "would"] feel compelled to act according to the same moral axioms. This is what "bridging the is–ought gap" would mean if you confined yourself to the logical-argumentation framework. But it's not what Sam is claiming to have shown.
If you're trying to convince me to do some thing X, then you must want me to do X, too. So we must be at least that aligned.
We don't have to be aligned in every regard. And you needn't yourself value every consequence of X that you hold up to me to entice me to X. But you do have to understand me well enough to know that I find that consequence enticing.
But that seems to me to be both plausible and enough to support the kind of dialectical moral argumentation that I'm talking about.
Thank you for the link to the transcript. Here are the parts that I read in that way (emphasis added):
[Sam:] So it seems that goal-directed behavior is implicit (or even explicit) in this definition of intelligence. And so whatever intelligence is, it is inseparable from the kinds of behavior in the world that result in the fulfillment of goals. So we’re talking about agents that can do things; and once you see that, then it becomes pretty clear that if we build systems that harbor primary goals—you know, there are cartoon examples here like making paperclips—these are not systems that will spontaneously decide that they could be doing more enlightened things than (say) making paperclips.
This moves to the question of how deeply unfamiliar artificial intelligence might be, because there are no natural goals that will arrive in these systems apart from the ones we put in there. And we have common-sense intuitions that make it very difficult for us to think about how strange an artificial intelligence could be. Even one that becomes more and more competent to meet its goals.
[...]
[Sam:] One thing this [paperclip-maximizer] thought experiment does: it also cuts against the assumption that [...] we’re not going to build something that is superhuman in competence that could be moving along some path that’s as incompatible with our wellbeing as turning every spare atom on Earth into a paperclip.
A bit later, Sam does deny that facts and values are "orthogonal" to each other, but he does so in the context of human minds ("we" ... "us") in particular:
Sam: So generally speaking, when we say that some set of concerns is orthogonal to another, it’s just that there’s no direct implication from one to the other. Some people think that facts and values are orthogonal to one another. So we can have all the facts there are to know, but that wouldn’t tell us what is good. What is good has to be pursued in some other domain. I don’t happen to agree with that, as you know, but that’s an example.
Eliezer: I don’t technically agree with it either. What I would say is that the facts are not motivating. “You can know all there is to know about what is good, and still make paperclips,” is the way I would phrase that.
Sam: I wasn’t connecting that example to the present conversation, but yeah.
So, Sam and Eliezer agree that humans and paperclip maximizers both learn what "good" means (to humans) from facts alone. They agree that humans are motivated by this category of "good" to pursue those things (world states or experiences or whatever) that are "good" in this sense. Furthermore, that a thing X is in this "good" category is an "is" statement. That is, there's a particular bundle of exclusively "is" statements that captures just the qualities of a thing that are necessary and sufficient for it to be "good" in the human sense of the word.
More to my point, Sam goes on to agree, furthermore, that a superintelligent paperclip maximizer will not be motivated by this notion of "good". It will be able to classify things correctly as "good" in the human sense. But no amount of additional scientific knowledge will induce it to be motivated by this knowledge to pursue good things.
Sam does later say that "There are places where intelligence does converge with other kinds of value-laden qualities of a mind":
[Sam:] I do think there’s certain goals and certain things that we may become smarter and smarter with respect to, like human wellbeing. These are places where intelligence does converge with other kinds of value-laden qualities of a mind, but generally speaking, they can be kept apart for a very long time. So if you’re just talking about an ability to turn matter into useful objects or extract energy from the environment to do the same, this can be pursued with the purpose of tiling the world with paperclips, or not. And it just seems like there’s no law of nature that would prevent an intelligent system from doing that.
Here I read him again to be saying that, in some contexts, such as in the case of humans and human-descendant minds, intelligence should converge on morality. However, no law of nature guarantees any such convergence for an arbitrary intelligent system, such as a paperclip maximizer.
This quote might make my point in the most direct way:
[Sam:] For instance, I think the is-ought distinction is ultimately specious, and this is something that I’ve argued about when I talk about morality and values and the connection to facts. But I can still grant that it is logically possible (and I would certainly imagine physically possible) to have a system that has a utility function that is sufficiently strange that scaling up its intelligence doesn’t get you values that we would recognize as good. It certainly doesn’t guarantee values that are compatible with our wellbeing. Whether “paperclip maximizer” is too specialized a case to motivate this conversation, there’s certainly something that we could fail to put into a superhuman AI that we really would want to put in so as to make it aligned with us.
A bit further on, Sam again describes how, in his view, "ought" evaporates into "is" statements under a consequentialist analysis. His argument is consistent with my "dialectical" reading. He also reiterates his agreement that sufficient intelligence alone isn't enough to guarantee convergence on morality:
[Sam:] This is my claim: anything that you can tell me is a moral principle that is a matter of oughts and shoulds and not otherwise susceptible to a consequentialist analysis, I feel I can translate that back into a consequentialist way of speaking about facts. These are just “is” questions, just what actually happens to all the relevant minds, without remainder, and I’ve yet to find an example of somebody giving me a real moral concern that wasn’t at bottom a matter of the actual or possible consequences on conscious creatures somewhere in our light cone.
Eliezer: But that’s the sort of thing that you are built to care about. It is a fact about the kind of mind you are that, presented with these answers to these “is” questions, it hooks up to your motor output, it can cause your fingers to move, your lips to move. And a paperclip maximizer is built so as to respond to “is” questions about paperclips, not about what is right and what is good and the greatest flourishing of sentient beings and so on.
Sam: Exactly. I can well imagine that such minds could exist ...
Sam Harris grants the claim that you find objectionable (see his podcast conversation with Yudkowsky). So it’s not the crux of the disagreement that this post is about.
It certainly shows that Eliezer understands the distinction that I'm highlighting.
Again you may object that this is circular reasoning and I am assuming ought right in the statement 1. But it would be like saying that I am assuming to have two apples. Sure, I am assuming that. And what is the problem exactly?
The difference from Sean Carroll's point of view (logical argumentation) is that not every scientifically competent agent will find this notion of "ought" compelling. (Really, only chess-playing programs would, if "ought" is taken in a terminal-value sense.) Whereas, such an agent's scientific competence would lead it to find compelling the axiom that you have two apples.
And I think that Sam Harris would agree with that, so far as it goes. But he would deny that this keeps him from reducing "ought" statements to purely scientific "is" statements, because he's taking the dialectical-argumentation point of view, not the logical-argumentation one. At any rate, Harris understands that a superintelligent AI might not be bothered by a universe consisting purely of extreme suffering. This was clear from his conversation with Eliezer Yudkowsky.
This post isn't arguing for any particular moral point of view over another, so you'll get no debate from me :).
Just to elaborate on the point of the post, though:
From the logical-argumentation point of view, something like the unpacking that you describe is necessary, because a moral argument has to conclude with an "ought" statement, in which "ought" appears explicitly, so the "ought" has to get introduced somewhere along the way, either in the original axioms or as a subsequent definition.
From the dialectical-argumentation point of view, this unpacking of "ought" is unnecessary, at least within the moral argument itself.
Granted, the persuader will need to know what kinds of "is" facts actually persuade you. So the persuader will have to know that "ought" means whatever it means to you. But the persuader won't use the word "ought" in the argument, except in some non-essential and eliminable way.
It's not like the persuader should have to say, "Do X, because doing X will bring about world W, and you assign high moral weight or utility to W."
Instead, the persuader will just say, "Doing X will bring about world W". That's purely an "is" statement. Your internal process of moral evaluation does the rest. But that process has to happen inside of you. It shouldn't—indeed, it can't—be carried out somehow within the statements of the argument itself.
First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof?
I don't mean to distinguish it from logical proof in the everyday sense of that term. Rational persuasion can be as logically rigorous as the circumstances require. What I'm distinguishing "rational persuasion" from is a whole model of moral argumentation that I'm calling "logical argumentation" for the purposes of this post.
If you take the model of logical argumentation as your ideal, then you act as if a "perfect" moral argument could be embedded, from beginning to end, from axiomatic assumptions to "ought"-laden conclusions, as a formal proof in a formal logical system.
On the other hand, if you're working from a model of dialectical argumentation, then you act as if the natural endpoint is to persuade a rational agent to act. This doesn't mean that any one argument has to work for all agents. Harris, for example, is interested in making arguments only to agents who, in the limit of ideal reflection, acknowledge that a universe consisting exclusively of extreme suffering would be bad. However, you may think that you could still find arguments that would be persuasive (in the limit of ideal reflection) to nearly all humans.
Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty--a lower confidence level is acceptable? Or something else?
For the purposes of this post, I'm leaving much of this open. I'm just trying to describe how people are guided by various vague ideals about what ideal moral argumentation "should be".
But you're right that the word "rational" is doing some work here. Roughly, let's say that you're a rational agent if you act effectively to bring the world into states that you prefer. On this ideal, to decide how to act, you just need information about the world. Your own preferences do the work of using that information to evaluate plans of action. However, you aren't omniscient, so you benefit from hearing information from other people and even from having them draw out some of its implications for you. So you find value in participating in conversations about what to do. Nonetheless, you aren't affected by rhetorical fireworks, and you don't get overwhelmed by appeals to unreflective emotion (emotional impulses that you would come to regret on reflection). You're unaffected by the superficial features of who is telling you the information and how. You're just interested in how the world actually is and what you can do about it.
Do you need to have "deductive certainty" in the information that you use? Sometimes you do, but often you don't. You like it when you can get it, but you don't make a fetish of it. If you can see that it would be wasteful to spend more time on eking out a bit more certainty, then you won't do it.
"Rational persuasion" is the kind of persuasion that works on an agent like that. This is the rough idea.
Modest epistemology is slippery. You put forward an abstract formulation (Rule M), but "modestists" will probably not identify with it. Endorsing such an abstract view would conflict with modesty itself. Only a hedgehog would put any confidence in such a general principle, so divorced from any foxy particulars.
That's why any real-world modestist will advocate modesty only in particular contexts. That's why your friend was happy to say "Just no" about belief in God. God was not among the contexts where he thought that his being modest was warranted.
Consistent modestists don't advocate modesty "in general". They just think that, for certain people, including you and them, self-doubt is especially warranted when considering certain specific kinds of questions. Or they'll think that, for certain people, including you and them, trusting certain experts over one's own first-order reasoning is especially warranted. Now, you could ask them how their modesty could allow them to be so confident in their conclusion that modesty is warranted in just those cases. But they can consistently reply that, for people like them, that conclusion is not among the kinds of belief such that being modest is warranted.
The first several chapters of your book are very much on point, here. You're making the case that modesty is not warranted in certain cases — specific cases where your modest reader might have thought that it was (central bank policies and medical treatment). And you're providing powerful general methods for identifying such cases.
But this chapter, which argues against modesty in general, has to miss its mark. It might be persuasive to modest hedgehogs who have universalized their modesty. But modest hedgehogs are almost a contradiction in terms.
I can think of two possibilities:
[1] that morality is based on rational thought as expressed through language
[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition
Closer to [2]. Does the analogy in Section 2 make sense to you? That would be my starting point for trying to explain further.
∆-ness does not depend on the point of observation. If you like, just stipulate that you always view the configuration from a point outside the affine span of the configuration but on the line perpendicular to the affine span and passing through the configuration's barycenter. Then regular triangles, and only regular triangles, will project to regular triangles on your 2-dimensional display.
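Here is a minimal numerical sketch of that stipulation (the viewing distance and focal length below are arbitrary choices for illustration). Because every vertex lies at the same depth along the viewing axis, the projection is just a uniform scaling, so equal side lengths stay equal:

```python
import numpy as np

def project_from_normal_viewpoint(verts, dist=5.0, focal=1.0):
    """Pinhole projection of a planar configuration, viewed from a point on the
    line through the barycenter perpendicular to its affine span."""
    verts = np.asarray(verts, dtype=float)
    center = verts.mean(axis=0)
    # Normal to the affine span (here: the plane of a triangle).
    n = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    n /= np.linalg.norm(n)
    eye = center + dist * n
    # Orthonormal basis (u, v) for the display plane.
    u = verts[1] - verts[0]
    u -= u.dot(n) * n
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    out = []
    for p in verts:
        rel = p - eye
        depth = -rel.dot(n)  # every vertex of the planar figure shares this depth
        out.append([focal * rel.dot(u) / depth, focal * rel.dot(v) / depth])
    return np.array(out)

def side_lengths(pts):
    pts = np.asarray(pts, dtype=float)
    return [np.linalg.norm(pts[i] - pts[(i + 1) % len(pts)]) for i in range(len(pts))]

# A regular (equilateral) triangle placed in 3D.
tri = [[0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0]]
print(side_lengths(tri))                                 # all equal
print(side_lengths(project_from_normal_viewpoint(tri)))  # still all equal (uniformly scaled)
```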
An interesting thought-experiment. But I don't follow this part:
So in theory we could hand it off to human philosophers or some other human-based procedure, thus dealing with "complexity of value" without much risk.
The complexity of value has to do with how the border delineating good outcomes from all possible outcomes cannot be specified in a compact way. Granted, the space of possible polygon arrangements is smaller than the space of possible atom arrangements. That does make the space of possible outcomes relatively more manageable in your VR world. But the space of outcomes is still Vast. It seems Vast enough that the border separating good from bad is still complex beyond our capacity to specify.
There was a historical attempt to summarise all major Less Wrong posts, an interesting but incomplete project. It was also approached without a usefully normalised approach. Ideally, every article would have its own page which could be heavily tagged up with metadata such as themes, importance, length, quality, author and such. Is this the goal of the wiki?
I wrote a dozen or two of those summaries. My goal was to write a highly distilled version of the post itself.
I aimed for summaries that were about four or five sentences long. Very roughly speaking, I tried to have a sentence for each principal thesis, and a sentence for each supporting argument. As a self-imposed constraint, I kept my summaries under 70 words.
For me, the summary should capture just the logical structure supporting the final take-away of the post, while losing all the anecdotes, illustrative examples, tribal signals, pseudo-dialectical back-and-forth, and discursive meanderings in the original.
Even when cars were new they couldn't be overbuilt the way buildings were in prehistory because they still had to be able to move themselves around.
Which is interesting corroboration in light of CronoDAS's comment that cars have been getting more durable, not less.
And cars are significantly more durable
That is an important counter-weight to the claims in the article I linked to.
ETA: Though maybe it's actually consistent in light of dogiv's observation that there were always limits on how much you could overbuild cars.
Planned obsolescence alone doesn't explain the change over time of this phenomenon. It's a static explanation, one which applies equally well to every era, unless something more is said. So the question becomes, Why are manufacturers planning for sooner obsolescence now than they did in the past?
Likewise, "worse materials cost less" is always true. It's a static fact, so it can't explain the observed dynamic phenomenon by itself. Or, at least, you need to add some additional data, like, "materials are available now that are worse than what used to be available". That might explain something. It would be another example of things being globally better in a perverse sense (more options = better).
Totally unrealistic.
Thorin was never in a position to hire mithril miners. He had sufficient capital for only a very brief time before dying at the Battle of Five Armies.
You are emphasizing the truth-values at the nodes of the belief network ("check back to Q and P"). That is important. After all, in the end, you do want to have the right truth-values in the buckets.
But there are also structural questions about the underlying graph. Which edges should connect the nodes and, perhaps more deeply, which nodes should be there in the first place? When should new nodes be created? These are the questions addressed by Phil's and Anna's posts.
An enticing advert.
Possible typo: "life it out".
Of course, when calling people idiots for not agreeing with material that is called crackpot, you had better be careful, because if you are not right about the material, if it is crackpot, you are gone for good.
But you aren't "gone for good". You will have your own tribe of believers who will still support you. Before they had been called "fuckwits" they might have deserted you when the evidence didn't go your way. But they're not going to desert you now, not when doing so would be tantamount to admitting that they were fuckwits all along.
If someone who wanted to learn to dance were to say: For centuries, one generation after the other has learned the positions, and it is high time that I take advantage of this and promptly begin with the quadrille—people would presumably laugh a little at him, but in the world of spirit [i.e., development of one's soul] this is [thought to be] very plausible. What, then, is education? I believed it is the course the individual goes through in order to catch up with himself, and the person who will not go through this course is not much helped by being born in the most enlightened age.
Søren Kierkegaard, Fear and Trembling, III 96 (trans. H. V. Hong and E. H. Hong). Annotations are mine.
[A]ctually a 95% confidence interval is an interval generated by a process, where the process has a 95% chance of generating a confidence interval that contains the true mean.
Is it incorrect for a Bayesian to gloss this as follows?
Given (only) that this CI was generated by process X with input 0.95, this CI has a 95% chance of containing the true mean.
I could imagine a frequentist being uncomfortable with talk of the "chance" that the true mean (a certain fixed number) is between two other fixed numbers. "The true mean either is or is not in the CI. There's no chance about it." But is there a deeper reason why a Bayesian would also object to that formulation?
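For concreteness, here is a minimal simulation of the quoted frequentist definition, assuming normally distributed data with a known standard deviation (an assumption made only to keep the interval formula simple). The long-run coverage frequency comes out near 95%, which is the property the definition attributes to the process rather than to any one interval:

```python
import random
import statistics

TRUE_MEAN, SIGMA, N, TRIALS = 10.0, 2.0, 25, 20_000
Z95 = 1.96  # two-sided 95% quantile of the standard normal

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    xbar = statistics.mean(sample)
    half_width = Z95 * SIGMA / N ** 0.5  # known-sigma interval, for simplicity
    if xbar - half_width <= TRUE_MEAN <= xbar + half_width:
        covered += 1

# About 95% of the generated intervals contain the true mean.
print("Coverage over", TRIALS, "intervals:", covered / TRIALS)
```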
Smart phones are primarily pocket-sized PCs. Many of their most-attractive features could be developed only with strong expertise in computer and computer-interface design. Apple was world-class in these areas. Granted, the additional feature of being a phone was outside of Apple's wheelhouse. Nonetheless, Apple could contribute strong expertise to all but one of the features in the sum
(features of a pocket-sized PC) + (the feature of being a phone).
Somehow, this one remaining feature (phoning) got built into the name "smart phone". But the success of the iPhone is due to how well the other features were implemented. It turned out that being a phone could be done sufficiently well without expertise in building phones, given strong expertise in building pocket-sized PCs.
In general terms, Apple identified an X (phones) that could be improved by adding Y (features of PCs). They set themselves to making X+Y. Crucially, Y was something in which Apple already had tremendous expertise. True, the PC features would have to be constrained by the requirement of being a phone. (Otherwise, you get this.) But the hardest part of that is miniaturization, and Apple already had expertise in this, too. So, Apple had expertise in Y and in a major part of combining X and Y.
In other words, this was not a case of a non-expert beating experts at their own game. It was a case of a Y-expert beating the X-experts (or Xperts, if you will) at making X+Y.
On the other hand, PhilGoetz identified an X (cars) that could be improved by adding Y (good cup-holders). In contrast to Apple's case, Phil displays no expertise in Y at all. In particular, he displays no expertise at the hardest part of combining X and Y, which is getting the cup-holder to fit in the car without getting in the way of anything else more important.
If Phil turned out to be right, it really would be a case of a non-expert beating the experts. So it would be much more surprising than Apple's beating Nokia.
A special case of this fallacy that you often see is
Your Axioms (+ My Axioms) yield a bald contradiction. Therefore, your position isn't even coherent!
This is a special case of the fallacy because the charge of self-contradiction could stick only if the accused person really subscribed to both Your Axioms and My Axioms. But this is only plausible because of an implicit argument: "My Axioms are true, so obviously the accused believes them. The accused just hasn't noticed the blatant contradiction that results."
I think that this problem is fixed by reducing your identity even further:
"I am a person who aims to find the right and good way for me to be, and my goal is to figure out how to make myself that way."
This might seem tautological and vacuous. But living up to it means actually forming hypotheses about what the good way to be is, and then testing those hypotheses. I'm confident that "being effective" is part of the good way to be. But, as you point out, effectiveness alone surely isn't enough. Effectively doing good things, not bad things, makes all the difference.
At any rate, effectiveness itself is only a corollary of the ultimate goal, which is to be good. As a mere corollary, effectiveness does not endanger my recognition of other aspects of being good, such as keeping promises and maintaining a certain kind of loyalty to my local group.
The upshot, in my view, is that AnnaSalamon's approach ultimately converges on virtue ethics.
Why is this being downvoted (apart from misspelling the name)? I take the quote to be a version of "If it's stupid and works, it's not stupid."
Experience has shown that it is by no means difficult for philosophy to begin. Far from it. It begins with nothing, and consequently can always begin. But the difficulty, both for philosophy and for philosophers, is to stop.
Søren Kierkegaard, Either/Or, vol. 1 (trans. Swenson & Swenson).
Does the disagreement, whatever it is, have any more impact on anything outside itself than semiotics does?
I can't say how it compares to semiotics because I don't know that field or its history.
If you're just asking whether foundations-of-math questions have had any impact outside of themselves, then the answer is definitely Yes.
For example, arguments about the foundations of mathematics led to developments in logic and automated theorem proving. Gödel worked out his incompleteness theorems within the context of Russell and Whitehead's Principia Mathematica. One of the main purposes of PM was to defend the logicist thesis that mathematical claims are just logical tautologies concerning purely logical concepts. Also, PM is the first major contribution that I know of to the study of Type Theory, which in turn is central in automated theorem proving.
Also, if you're trying to assess whether you believe in the Tegmark IV multiverse, which says that everything is math, then what you think math is is probably going to play some part in that assessment. Maybe that is just a case of one pragmatically-pointless question's bearing on another, but there it is.
If it meant something, semioticians could take actual sentences, and then show how the two opposing views provide different interpretations of those sentences
Is that fair?
Everyone agrees that 2+2=4, but people disagree about what that statement is about. Within the foundations of mathematics, logicists and formalists can have a substantive disagreement even while agreeing on the truth-value of every particular mathematical statement.
Analogously, couldn't semioticians agree about the interpretation of every text, but disagree about the nature of the relationship between the text and its correct interpretation? Granted that X is the correct interpretation of Y, what exactly is it about X and Y that makes this the case? Or is there some third thing Z that makes X the correct interpretation of Y? Or is Z not a thing in its own right, but rather a relation among things? And, if so, what is the nature of that relation? Aren't those the kinds of questions that semioticians disagree about?
No, I don't think so. But I'm not sure how to elaborate without knowing why you thought that.
Last I checked, your edits haven't changed which answer is correct in your scenario. As you've explained, the Ace is impossible given your set-up.
(By the way, I thought that the earliest version of your wording was perfectly adequate, provided that the reader was accustomed to puzzles given in a "propositional" form. Otherwise, I expect, the reader will naturally assume something like the "algorithmic" scenario that I've been describing.)
In my scenario, the information given is not about which propositions are true about the outcome, but rather about which algorithms are controlling the outcome.
To highlight the difference, let me flesh out my story.
Let K be the set of card-hands that contain at least one King, let A be the set of card-hands that contain at least one Ace, and let Q be the set of card-hands that contain at least one Queen.
I'm programming the card-dealing robot. I've prepared two different algorithms, either of which could be used by the robot:
Algorithm 1: Choose a hand uniformly at random from K ∪ A, and then deal that hand.
Algorithm 2: Choose a hand uniformly at random from Q ∪ A, and then deal that hand.
These are two different algorithms. If the robot is programmed with one of them, it cannot be programmed with the other. That is, the algorithms are mutually exclusive. Moreover, I am going to use one or the other of them. These two algorithms exhaust all of the possibilities.
In other words, of the two algorithm-descriptions above, exactly one of them will truthfully describe the robot's actual algorithm.
I flip a coin to determine which algorithm will control the robot. After the coin flip, I program the robot accordingly, supply it with cards, and bring you to the table with the robot.
You know all of the above.
Now the robot deals you a hand, face down. Based on what you know, which is more probable: that the hand contains a King, or that the hand contains an Ace?
Ace is not more probable.
Ace is more probable in the scenario that I described.
Of course, as you say, Ace is impossible in the scenario that you described (under its intended reading). The scenario that I described is a different one, one in which Ace is most probable. Nonetheless, I expect that someone not trained to do otherwise would likely misinterpret your original scenario as equivalent to mine. Thus, their wrong answer would, in that sense, be the right answer to the wrong question.
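For anyone who wants to check the claim numerically, here is a minimal Monte Carlo sketch of the robot scenario above. It assumes, purely for concreteness, two-card hands dealt from a standard 52-card deck; the hand size is not essential to the conclusion:

```python
import random

# 0 = Ace, 11 = Queen, 12 = King; four cards of each rank.
RANKS = [r for r in range(13) for _ in range(4)]
ACE, QUEEN, KING = 0, 11, 12
HAND_SIZE = 2  # assumed hand size, chosen only for concreteness

def deal(target_ranks):
    # Rejection sampling: uniform over hands containing at least one target rank.
    while True:
        hand = random.sample(RANKS, HAND_SIZE)
        if any(r in target_ranks for r in hand):
            return hand

def trial():
    # Coin flip chooses Algorithm 1 (K ∪ A) or Algorithm 2 (Q ∪ A).
    targets = {KING, ACE} if random.random() < 0.5 else {QUEEN, ACE}
    return deal(targets)

N = 200_000
hands = [trial() for _ in range(N)]
print("P(at least one King) ≈", sum(KING in h for h in hands) / N)
print("P(at least one Ace)  ≈", sum(ACE in h for h in hands) / N)
```

Under Algorithm 1 a King and an Ace are equally likely, while under Algorithm 2 an Ace is much more likely than a King, so averaging over the coin flip makes the Ace more probable overall.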