Posts

Causation, Probability and Objectivity 2012-03-18T06:54:33.513Z

Comment by antigonus on Is the orthogonality thesis at odds with moral realism? · 2013-11-06T03:12:01.535Z · LW · GW

I agree with vallinder's point, and would also like to add that arguments for moral realism which aren't theistic or contractarian in nature typically appeal to moral intuitions. Thus, instead of providing positive arguments for realism, they at best merely show that arguments for the unreliability of realists' intuitions are unsound. (For example, IIRC, Russ Shafer-Landau in his book tries to use a parity argument between moral and logical intuitions, so that arguments against the former would have to also apply to the latter.) But clearly this is an essentially defensive maneuver which poses no threat to the orthogonality thesis (even if motivational judgment internalism is true), because the latter works just as well when you substitute "moral intuition" for "goal."

Comment by antigonus on Causation, Probability and Objectivity · 2012-03-18T21:49:34.757Z · LW · GW

If I scratch my nose, that action has no truth value. No color either.

The proposition "I scratched my nose" does have a truth value.

Bayesian epistemology maintains that probability is degree of belief. Assertions of probabilities are therefore assertions of degrees of belief, which are psychological claims and therefore obviously have or can have truth-value. Of course, Bayesians can be more nuanced and take some probability claims to be about degrees of belief in the minds of some idealized reasoner; but "the degree of belief of an idealized reasoner would be X given such-and-such" is still truth-evaluable.

See the distinction. Don't hand wave it with "it's all the same", "that's just semantics", etc. You started saying that this is more of a question. I've tried to clarify the answer to you.

The question was primarily about the role of probability in Pearl's account of causality, not the basic meaning of probability in Bayesian epistemology.

Comment by antigonus on Causation, Probability and Objectivity · 2012-03-18T21:23:49.287Z · LW · GW

Nope, I wasn't familiar. Very interesting, thanks!

Comment by antigonus on Causation, Probability and Objectivity · 2012-03-18T21:21:38.512Z · LW · GW

Probability assignments don't have truth value,

Sure they do. If you're a Bayesian, an agent truly asserts that the (or, better, his) probability of a claim is X iff his degree of belief in the claim is X, however you want to cash out "degree of belief". Of course, there are other questions about the "normatively correct" degrees of belief that anyone in the agent's position should possess, and maybe those lack determinate truth-value.

Comment by antigonus on Causation, Probability and Objectivity · 2012-03-18T20:39:04.198Z · LW · GW

I don't see the relation between the two. It seems like you're pointing out that Jaynes/people here don't believe there are "objectively correct" probability distributions that rationality compels us to adopt. But this is compatible with there being true probability claims, given one's own probability distribution - which is all that's required.

Comment by antigonus on Causation, Probability and Objectivity · 2012-03-18T17:21:29.292Z · LW · GW

That statement is too imprecise to capture Jaynes's view of probability.

Of course; it wasn't intended to capture the difference between so-called objective and subjective Bayesianism. The tension, if it arises at all, arises from any sort of Bayesianism. That the rules prescribed by Jaynes don't pick out the "true" probability distributions on a certain question is compatible with probability claims like "It will probably rain tomorrow" having a truth-value.

Comment by antigonus on Causation, Probability and Objectivity · 2012-03-18T07:57:37.582Z · LW · GW

I don't understand where the tension is supposed to come in.

It just seems really weird to be able to correctly say that A caused B when, in fact, A had nothing to do with B. If that doesn't seem weird to you, then O.K.

The idea that causation is in the mind, not in the world is part of the Humean tradition

I think that's unclear; I side with those who think Hume was arguing for causal skepticism rather than some sort of subjectivism.

Comment by antigonus on Evolutionary psychology: evolving three eyed monsters · 2012-03-18T00:10:34.641Z · LW · GW

No considerations are given for the strength of the advantage

I wish this were stressed more often. It's really easy to think up selective pressures on any trait and really hard to pin down their magnitude. This means that most armchair EP explanations have very low prior probabilities by default, even if they seem intuitively reasonable.

Comment by antigonus on Cult impressions of Less Wrong/Singularity Institute · 2012-03-15T07:31:31.967Z · LW · GW

The word "cult" never makes discussions like these easier. When people call LW cultish, they are mostly just expressing that they're creeped out by various aspects of the community - some perceived groupthink, say. Rather than trying to decide whether LW satisfies some normative definition of the word "cult," it may be more productive to simply inquire as to why these people are getting creeped out. (As other commenters have already been doing.)

Comment by antigonus on Cult impressions of Less Wrong/Singularity Institute · 2012-03-15T03:17:02.440Z · LW · GW

Don't mindkill their cached thoughts.

Comment by antigonus on Anyone have any questions for David Chalmers? · 2012-03-12T03:38:18.432Z · LW · GW

Comment by antigonus on Anyone have any questions for David Chalmers? · 2012-03-11T18:48:21.428Z · LW · GW

Do you feel this is a full rebuttal to McDermott's paper? I agree that his generalized argument against "extendible methods" is a straw man; however, he has other points about Chalmers' failure to argue for existing extendible methods being "extendible enough."

Comment by antigonus on Anyone have any questions for David Chalmers? · 2012-03-11T01:43:15.496Z · LW · GW

Questions on anything, or just topics that relate to the class? If the former, I'd like to hear his response to Drew McDermott's critique of his Singularity article in JCS, even though I think he's going to publish a response to it and others in the next issue.

Comment by antigonus on [Link, 2011] Team may be chosen to receive $1.4 billion to simulate human brain · 2012-03-10T22:42:00.825Z · LW · GW

I just noticed from that document that you listed Alexander Funcke as owner of "Zelta Deta." Googling his name, I think you meant "Zeta Delta?"

Comment by antigonus on Rationality Quotes March 2012 · 2012-03-05T09:48:42.602Z · LW · GW

Yeah, you're correct. Wasn't thinking very hard.

Comment by antigonus on Rationality Quotes March 2012 · 2012-03-05T07:33:53.858Z · LW · GW

I tell you that as long as I can conceive something better than myself I cannot be easy unless I am striving to bring it into existence or clearing the way for it.

-- G.B. Shaw, "Man and Superman"

Shaw evinces a really weird, teleological view of evolution in that play, but in doing so expresses some remarkable and remarkably early (1903) transhumanist sentiments.

Comment by antigonus on How to Fix Science · 2012-03-04T07:18:11.663Z · LW · GW

You may want to check out John Earman's Bayes or Bust?.

Comment by antigonus on A defense of formal philosophy · 2012-02-18T04:32:53.629Z · LW · GW

For example, Aristotle proved lots of stuff based on the infallibility of sensation

I don't know much about Aristotle, but this claim sounds to me like a distortion of something Aristotle might have said.

Comment by antigonus on One last roll of the dice · 2012-02-04T01:37:36.035Z · LW · GW

No, never seen that before.

Comment by antigonus on One last roll of the dice · 2012-02-03T04:08:48.678Z · LW · GW

Something has gone horribly wrong here.

Comment by antigonus on Help! Name suggestions needed for Rationality-Inst! · 2012-01-29T06:47:22.014Z · LW · GW

Afterthought

Comment by antigonus on How would you talk a stranger off the ledge? · 2012-01-24T12:08:20.506Z · LW · GW

When I told people about the plan in #1, though, it was because I wanted them to listen to me. I was back off the brink for some reason, and I wanted to talk about where I'd been. Somebody who tells you they're suicidal isn't asking you to talk him out of it; he's asking you to listen.

Just wanted to say that I relate very strongly to this. When I was severely mentally ill and suicidal, I was afraid of reaching out to other people, precisely because doing so might mean I only wanted emotional support rather than seriously intending to kill myself. People who really wanted to end their lives, I reasoned, would avoid deliberately setting off alarm bells in others that might lead to interference. That I eventually chose to open up about my psychological condition at all (and thereby deviated from the "paradigmatic" rational suicidal person) gave me evidence that I didn't want to kill myself and helped me come to terms with recovering. Sorry if this is rambling.

Comment by antigonus on The Singularity Institute's Arrogance Problem · 2012-01-22T08:07:21.027Z · LW · GW

Of course it depends on the specific papers and the nature of the publications. "Publish more papers" seems like shorthand for "Demonstrate that you are capable of rigorously defending your novel/controversial ideas well enough that very many experts outside of the transhumanism movement will take them seriously." It seems to me that doing this would change a lot of people's behavior.

Comment by antigonus on How is your mind different from everyone else's? · 2011-12-18T00:48:40.967Z · LW · GW

I don't imagine it would have nearly as much of an effect on people who aren't familiar with anime. But I would read that study in a heartbeat if it existed.

Comment by antigonus on Selfish reasons for FAI · 2011-12-18T00:35:58.426Z · LW · GW

Genie AI?

Comment by antigonus on What are you working on? December 2011 · 2011-12-14T03:26:09.838Z · LW · GW

One is the asymmetry, which is the better one, but it has weird assumptions about personhood - reasonable views either seem to suggest immediate suicide (if there is no continuity of self and future person-moments are thus brought into existence, you are harming future-you by living)

I'm not sure I remember his arguments relying on those assumptions in his asymmetry argument. Maybe he needs them to justify not committing suicide, but I thought the badness of suicide wasn't central to his thesis.

Comment by antigonus on What are you working on? December 2011 · 2011-12-14T02:25:07.496Z · LW · GW

I'm reading Benatar's Better Never To Have Been and I noticed that the actual arguments for categorical antinatalism aren't as strong as I thought and seem to hinge on either a pessimistic view of technological progress (which might well be justified)

I don't think this is true. Benatar's position is that any being that ever suffers is harmed by being created. This is not something that technological progress is very likely to relieve. Or are you thinking of some sort of wireheading?

or confusions about identity and personhood.

That sounds like an interesting criticism.

Comment by antigonus on two puzzles on rationality of defeat · 2011-12-13T19:24:04.049Z · LW · GW

I suppose one could draw from this a similar response to any Dutch book argument. Sure, if my "degree of belief" in a possible statement A is 2, I can be Dutch booked. But now that I'm licensed to disbelieve entailments (so long as I take myself to be ignorant that they're entailments), perhaps I justifiably believe that I can't be Dutch booked. So what rational constraints are there on any of my beliefs? Whatever argument you give me for a constraint C from premises P1, ..., Pn, I can always potentially justifiably believe the conditional "If the premises P1, ..., Pn are true, then C is correct" has low probability - even if the argument is purely deductive.
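The sure-loss structure being appealed to here can be made concrete. A minimal sketch (the dollar framing and function name are mine, not from the comment): an agent whose "degree of belief" in A is 2 treats $2 as the fair price for a ticket paying $1 if A is true, and so loses money in every possible world:

```python
def dutch_book_loss(credence, stake=1.0):
    """Agent's net payoff after buying a $`stake` ticket on A at the
    price its credence deems fair, in each possible world."""
    price = credence * stake
    return {
        "A true": stake - price,  # ticket pays out
        "A false": 0.0 - price,   # ticket is worthless
    }

outcomes = dutch_book_loss(credence=2.0)
assert all(v < 0 for v in outcomes.values())  # a sure loss either way
print(outcomes)  # {'A true': -1.0, 'A false': -2.0}
```

The same bookkeeping shows why credences in [0, 1] escape the trap: at a price of at most $1 per dollar of stake, the A-true world leaves the agent with a non-negative net payoff.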

Comment by antigonus on two puzzles on rationality of defeat · 2011-12-13T13:45:20.938Z · LW · GW

Logical omniscience comes from probability "statics," not conditionalization. When A is any propositional tautology, P(A) (note the lack of conditional) can be algebraically manipulated via the three Kolmogorov axioms to yield 1. Rejecting one of the axioms to avoid this result leaves you vulnerable to Dutch books. (Perhaps this is not so surprising, since reasoning about Dutch books assumes classical logic. I have no idea how one would handle Dutch book arguments if we relax this assumption.)
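For concreteness, the algebraic manipulation is short (standard notation; since A is a tautology, the event ¬A is the empty set):

```latex
\begin{align*}
P(\Omega) &= P(\Omega \cup \emptyset) = P(\Omega) + P(\emptyset)
  && \Rightarrow\ P(\emptyset) = 0 \\
1 &= P(\Omega) = P(A \cup \neg A) = P(A) + P(\neg A)
  && \text{(normalization, then additivity)} \\
  &= P(A) + P(\emptyset) = P(A)
  && (\neg A \text{ is a contradiction})
\end{align*}
```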

Comment by antigonus on two puzzles on rationality of defeat · 2011-12-13T11:14:57.421Z · LW · GW

Could you explain in more detail why Bayesian epistemology can't be built without such an assumption?

Well, could you explain how to build it that way? Bayesian epistemology begins by interpreting (correct) degrees of beliefs as probabilities satisfying the Kolmogorov axioms, which implies logical omniscience. If we don't assume our degrees of belief ought to satisfy the Kolmogorov axioms (or assume they satisfy some other axioms which entail Kolmogorov's), then we are no longer doing Bayesian epistemology.

Comment by antigonus on Video Q&A with Singularity Institute Executive Director · 2011-12-13T09:20:21.782Z · LW · GW

One of the reasons given against peer review is that it takes a long time for articles to be published after acceptance. Is it not possible to make them available on your own website before they appear in the journal? (I really have barely any idea how these things work; but I know that in some fields you can do this.)

Comment by antigonus on Q&A #2 with Singularity Institute Executive Director · 2011-12-13T09:13:34.251Z · LW · GW

You mentioned recently that SIAI is pushing toward publishing an "Open Problems in FAI" document. How much impact do you expect this document to have? Do you intend to keep track? If so, and if it's less impactful than expected, what lesson(s) might you draw from this?

Comment by antigonus on two puzzles on rationality of defeat · 2011-12-13T07:19:56.398Z · LW · GW

I'm interested in what you have to say, and I'm sympathetic (I think), but I was hoping you could restate this in somewhat clearer terms. Several of your sentences are rather difficult to parse, like "And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite that that committment was fundamentally invalid."

Comment by antigonus on two puzzles on rationality of defeat · 2011-12-13T04:46:57.015Z · LW · GW

Sorry, I'm not sure I understand what you mean. Could you elaborate?

Comment by antigonus on two puzzles on rationality of defeat · 2011-12-12T19:48:59.887Z · LW · GW

I think a lot of the replies here suggesting that Bayesian epistemology easily dissolves the puzzles are mistaken. In particular, the Bayesian-equivalent of (1) is the problem of logical omniscience. Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic. But (1), suitably understood, provides a plausible scenario where logical omniscience fails.

I do agree that the correct understanding of the puzzles is going to come from formal epistemology, but at present there are no agreed-upon solutions that handle all instances of the puzzles.

Comment by antigonus on Rationality Quotes December 2011 · 2011-12-06T04:43:37.519Z · LW · GW

(scroll to 4:40) I like his one argument: if we have finite neurons and thus cannot construct an infinite set in our "map", what makes you think that you can make it correspond to a (hypothetical) infinity in the territory?

I don't really see what this argument comes to. The map-territory metaphor is a metaphor; neural structures do not have to literally resemble the structures they have beliefs about. In fact, if they did, then the objection would work for any finite structure that had more members than there are synapses (or whatever) in the brain.

Comment by antigonus on Intuitive Explanation of Solomonoff Induction · 2011-12-02T05:01:16.669Z · LW · GW

In that case, I'd say that your response involves special pleading. SI priors are uncomputable. If the fine structure constant is uncomputable, then any uncomputable prior that assigns probability 1 to the constant having its actual value will beat SI in the long run. What is illicit about the latter sort of uncomputable prior that doesn't apply to SI priors? Or am I simply confused somehow? (I'm certainly no expert on this subject.)

Comment by antigonus on Intuitive Explanation of Solomonoff Induction · 2011-12-02T04:26:05.407Z · LW · GW

You will find that even if you're endowed with the privileged knowledge that the fine structure constant is a halting oracle, that knowledge provably can't help you win a prediction game against SI

We can frequently compute the first several terms in a non-computable sequence, so this statement seems false.
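The point generalizes: a sequence can be uncomputable as a whole while any fixed finite prefix is knowable. A toy sketch (the "program" encoding is mine): for simple enough machines we can settle halting by bounded simulation, computing the first bits of a halting-style sequence:

```python
def steps_to_halt(program, cap):
    """Return the step count if `program` halts within `cap` steps, else None."""
    state = program["init"]
    for step in range(1, cap + 1):
        state = program["next"](state)
        if program["done"](state):
            return step
    return None

programs = [
    {"init": 0, "next": lambda s: s + 1, "done": lambda s: s >= 3},  # halts
    {"init": 0, "next": lambda s: s,     "done": lambda s: False},   # loops
]
# Bounded simulation alone can't certify non-halting; for the second
# program we can see directly that `done` is never true.
prefix = [1 if steps_to_halt(p, cap=100) is not None else 0 for p in programs]
print(prefix)  # [1, 0]
```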

Comment by antigonus on Living Metaphorically · 2011-11-29T20:15:16.947Z · LW · GW

I'm having trouble seeing your point in the context of the rest of the discussion. Tyrrell claimed that the pre-theoretic notion of an infinite set - more charitably, perhaps, the notion of an infinite cardinality - is captured by Dedekind's formal definition. Here, "capture" presumably means something like "behaves sufficiently similarly so as to preserve the most basic intuitive properties of." Your response appears to be that there is a good metaphorical analysis of infinitude that accounts for this pre-theoretic usage as well as some others simultaneously. And by "accounts for X," I take it you mean something like "acts as a cognitive equivalent, i.e., is the actual subject of mental computation when we think about X." What is this supposed to show? Does anyone really maintain that human brains are actually processing terms like "bijection" when they think intuitively about infinity?
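For reference, the formal definition at issue: Dedekind calls a set infinite iff it can be put in bijection with a proper subset of itself. A minimal sketch (function names mine) checks the standard witness for the naturals, n ↦ 2n, on a finite sample:

```python
def f(n):
    """The bijection N -> even naturals."""
    return 2 * n

def f_inv(m):
    """Its inverse on the even naturals."""
    assert m % 2 == 0
    return m // 2

sample = range(1000)
evens = [f(n) for n in sample]
assert all(f_inv(f(n)) == n for n in sample)  # invertible on the sample
assert set(evens) < set(range(2000))          # image is a proper subset
```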

Comment by antigonus on LW Philosophers versus Analytics · 2011-11-29T15:10:11.510Z · LW · GW

If they really honed their skills in crushing their opponents arguments, and could transmit this skill to other successfully, then we wouldn't have so many open questions in philosophy

What is your basis for concluding this? "Philosophers are really good at demolishing unsound arguments" is compatible with "Philosophers are really bad at coming to agreement." The primary difference between philosophy and biology that explains the ideological diversity of the former and the consensus of the latter is not that philosophers are worse critical thinkers. It is that, unlike in biology, virtually all of the evidence in philosophy is itself subject to controversy.

But either way, these posts should help us decide how far off my optimism is, and how far of your realism is. Can't wait to argue about the results. Do you wanna make any suggestions in my methods of comparison and sampling? All ears.

I'm not sure that your experiment makes any sense. What exactly are you going to be comparing? Most analytic philosophers in most articles don't take themselves to be offering "solutions" to any problems. They take themselves to be offering detailed, specific lines of argumentation which suggest a certain conclusion, while accommodating or defusing rival lines of argumentation that have appeared in the literature. That someone here may come up with a vaguely similar position to philosopher X's on issue Y tells us very little and ignores the meat of X's contribution.

Comment by antigonus on Living Metaphorically · 2011-11-29T14:42:22.594Z · LW · GW

I haven't read their book, but an analysis of the pre-theoretic concept of the infinitude of a set needn't be taken as an analysis of the pre-theoretic concept of infinitude in general. "Unmarried man" doesn't define "bachelor" in "bachelor of the arts," but that doesn't mean it doesn't define it in ordinary contexts.

Comment by antigonus on LW Philosophers versus Analytics · 2011-11-29T04:03:33.526Z · LW · GW

But let us not forget, that comparing molecular biology and philosophy, is like comparing self-help and physics.

I'm comparing the review processes of molecular biology and philosophy. In both cases, experts with a deep grasp of most/all the relevant pitfalls provide extensive, specific, technical feedback regarding likely sources of error, failure to address existing objections and important points of clarification. That this is superior to a glorified Facebook "Like" button used by individuals with often highly limited familiarity with the subject matter - often consisting of having read a few blog posts by the same individual who himself has highly limited familiarity with the subject matter - should go without saying, right?

The problem with self-help writers is that, in general, they are insufficiently critical. It has never been seriously alleged that philosophers are insufficiently critical, whatever their other faults. Philosophers are virtually dying to bury each other's arguments, and spend their entire careers successfully honing their abilities to do so. Therefore, surviving the gauntlet of their reviews is a better system of natural selection than having a few casually interested and generally like-minded individuals agree that they like your non-technical idea.

Comment by antigonus on LW Philosophers versus Analytics · 2011-11-29T03:40:45.317Z · LW · GW

I guess I can't really imagine how you came to that conclusion. You seem to be going preposterously overboard with your enthusiasm for LW here. Don't mean to offend, but that's the only way I know how to express the extent of my incredulity. Can you imagine a message board of dabblers in molecular biology congratulating each other over the advantages their board's upvoting system has over peer review?

Comment by antigonus on LW Philosophers versus Analytics · 2011-11-29T02:04:15.602Z · LW · GW

And because of our practices of constant focused argument, and karma selection, to select amongst positions, instead of the usual trend-method of philosophy.

I don't understand this. Are you saying that a casual voting system by a group of amateurs on a website consisting of informal blog posts is superior to rigorous peer-review by experts of literature-aware arguments?

Comment by antigonus on Criticisms of intelligence explosion · 2011-11-26T10:38:23.021Z · LW · GW

I agree there's good reason to imagine that, had further selective pressure on increased intelligence been applied in our evolutionary history, we probably would've ended up more intelligent on average. What's substantially less clear is whether we would've ended up much outside the present observed range of intelligence variation had this happened. If current human brain architecture happens to be very close to a local maximum of intelligence, then raising the average IQ by 50 points still may not get us to any IQ 200 individuals. So while there likely is a nearby region of decreasing f(x, x+1), it doesn't seem so obvious that it's wide enough to terminate in superintelligence. Given the notorious complexity of biological systems, it's extremely difficult to extrapolate anything about the theoretical limits of evolutionary optimization.

Comment by antigonus on Beyond the Reach of God · 2011-11-25T09:46:35.659Z · LW · GW

I didn't vote down your post (or even see it until just now), but it came across as a bit disdainful while being written rather confusingly. The former is going to poorly dispose people toward your message, and the latter is going to poorly dispose people toward taking the trouble to respond to it. If you try rephrasing in a clearer way, you might see more discussion.

Comment by antigonus on Criticisms of intelligence explosion · 2011-11-24T05:40:29.287Z · LW · GW

Comment by antigonus on Intro-level training materials for rationality / critical thinking · 2011-11-24T01:28:02.933Z · LW · GW

I think he's a sincere teenager who's very new to this sort of thing. They sound, behave and type like that.

Comment by antigonus on Criticisms of intelligence explosion · 2011-11-23T06:52:33.748Z · LW · GW

Another thing: We need to distinguish between getting better at designing intelligences vs. getting better at designing intelligences which are in turn better than one's own. The claim that "the smarter you are, the better you are at designing intelligences" can be interpreted as stating that the function f(x, y) outlined above is decreasing in x for any fixed y. But the claim that the smarter you are, the easier it is to create an intelligence even smarter than yourself is totally different, and equivalent to the aforementioned thesis about the shape of f(x, x+1).

I see the two claims conflated shockingly often, e.g., in Bostrom's article, where he simply states:

Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help constructing better AIs, which in turn would help building better AIs, and so forth.

and concludes that superintelligence inevitably follows with no intermediary reasoning on the software level. (Actually, he doesn't state that outright, but the sentence is at the beginning of the section entitled "Once there is human-level AI there will soon be superintelligence.") That an IQ 180 AI is (much) better at developing an IQ 190 AI than a human is doesn't imply that it can develop an IQ 190 AI faster than the human can develop the IQ 180 AI.
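The distinction is easy to render numerically. In the toy model below (my reading, not from the comment: f_step(x) stands in for f(x, x+1), the time a level-x designer needs to produce a level-(x+1) design), a feedback loop only "explodes" when the per-step times shrink fast enough for their sum to stay bounded; each designer beating its predecessor is not sufficient:

```python
def total_time(f_step, levels):
    """Total time to climb from level 0 to `levels`, one step at a time."""
    return sum(f_step(x) for x in range(levels))

shrinking = lambda x: 0.5 ** x  # each step takes half the previous time
flat = lambda x: 1.0            # smarter designers, but the next step is no easier

# Geometric per-step times: the whole climb is bounded (sum < 2) -- an
# intelligence explosion. Flat per-step times: total time grows without
# bound, even though every designer outperforms its predecessor.
assert total_time(shrinking, 50) < 2.0
assert total_time(flat, 50) == 50.0
```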

Comment by antigonus on Where do I most obviously still need to say "oops"? · 2011-11-23T06:26:37.365Z · LW · GW

For what it's worth, I've posted a fair number of things in my short time here that go against what I assume to be consensus, and I've mostly only been upvoted for them. (This includes posts that come close to making the cult comparison.)