Comments

Comment by indon on Why Are Individual IQ Differences OK? · 2014-08-16T15:29:07.874Z · score: 3 (3 votes) · LW · GW

And what makes you sure of that? It even looks like the outline for the three boxes along the top.

Our cultural assumptions are perhaps more subtle than the average person thinks.

Comment by indon on Why Are Individual IQ Differences OK? · 2014-08-15T17:40:58.145Z · score: 3 (3 votes) · LW · GW

If the first two shapes on the bottom are diamonds, why is the third shape a square?

Comment by indon on The True Prisoner's Dilemma · 2013-06-10T17:25:47.768Z · score: 0 (0 votes) · LW · GW

That's a good way to clearly demonstrate a nonempathic actor in the Prisoner's Dilemma: a "Hawk", who treats their own payoffs, and only their own payoffs, as having value, placing none on the payoffs of others.

But I don't think it's necessary. I would say that humans can visualize a nonempathic human - a bad guy - more easily than they can visualize an empathic human with slightly different motives. We've undoubtedly had to, collectively, deal with a lot of them throughout history.

A while back, while writing a paper, I came across a fascinating article about types of economic actors. It concluded that there are probably three general tendencies in human behavior, and thus three general groups of human actors who have those tendencies: one that tends to play 'tit-for-tat' (whom they call 'conditional cooperators'), one that tends to play 'hawk' ('rational egoists'), and one that tends to play 'grim' ('willing punishers').

So there are paperclip maximizers among humans. Only the paperclips are their own welfare, with no empathic consideration whatsoever.
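Those three tendencies correspond to standard strategies from the iterated Prisoner's Dilemma literature, which makes them easy to sketch. The payoff matrix and round count below are textbook defaults, not anything taken from the paper I mentioned:

```python
# Three behavioral types from the iterated Prisoner's Dilemma,
# matching the article's taxonomy (payoff values are the standard ones).
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_hist, their_hist):
    """Conditional cooperator: copy the opponent's previous move."""
    return their_hist[-1] if their_hist else "C"

def hawk(my_hist, their_hist):
    """Rational egoist: defect unconditionally."""
    return "D"

def grim(my_hist, their_hist):
    """Willing punisher: cooperate until the opponent defects even once."""
    return "D" if "D" in their_hist else "C"

def play(a, b, rounds=10):
    """Run an iterated match and return (a's score, b's score)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = a(hist_a, hist_b)
        move_b = b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Two conditional cooperators settle into mutual cooperation; a hawk wins its first round against a grim player and then both grind out mutual defection.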

Comment by indon on Standard and Nonstandard Numbers · 2013-05-19T13:51:55.424Z · score: 0 (0 votes) · LW · GW

Ah, so the statement is second-order.

And while I'm pretty sure you could replace the statement with an infinite number of first-order statements that precisely describe every member of the set (0S = 1, 0SS = 2, 0SSS = 3, etc), you couldn't say "These are the only members of the set", thus excluding other chains, without talking about the set - so it'd still be second-order.

Thanks!

Comment by indon on Standard and Nonstandard Numbers · 2013-05-19T00:38:17.385Z · score: 0 (0 votes) · LW · GW

Okay, my brain isn't wrapping around this quite properly (though the explanation has already helped me to understand the concepts far better than my college education on the subject has!).

Consider the statement: "There exists no x for which, for some number k, x after k successions is equal to zero." (¬∃x ∃k>0: S^k(x) = 0 is the closest I can figure to depict it formally.) Why doesn't that axiom eliminate the possibility of any infinite or finite chain that involves a number below zero, and thus eliminate the possibility of the two-sided infinite chain?

Or... is that statement a second-order one, somehow, in which case how so?

Edit: Okay, the gears having turned a bit further, I'd like to add: "For all x, there exists a number k such that 0 after k successions is equal to x."

That should deal with another possible understanding of that infinite chain. Or is defining k in those axioms the problem?
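For what it's worth, here are the two proposed axioms written out, using S^k(x) as shorthand for k applications of the successor function (my transcription, so the notation may differ from the original post's):

```latex
% No number reaches 0 by applying the successor (0 has no predecessors):
\neg \exists x \; \exists k > 0 : \; S^k(x) = 0
% Every number is reachable from 0 in finitely many successions:
\forall x \; \exists k \geq 0 : \; S^k(0) = x
```

The catch is the shorthand itself: "S^k" quantifies over a number k of applications, which smuggles in the very notion of finite iteration being defined. First-order logic has no way to express "apply S some finite number of times" directly, which is why both statements are second-order in disguise.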

Comment by indon on Proofs, Implications, and Models · 2013-05-18T20:36:23.383Z · score: 1 (1 votes) · LW · GW

I would suggest that the most likely reason for logical rudeness - not taking the multiple-choice - is that most arguments beyond a certain level of sophistication have more unstated premises than they have stated premises.

And I suspect it's not easy to identify unstated premises. Not just the ones you don't want to say, belief-in-belief sort of things, but ones you as an arguer simply aren't sufficiently skilled to describe.

As an example:

For example: Nick Bostrom put forth the Simulation Argument, which is that you must disagree with either statement (1) or (2) or else agree with statement (3):

In the given summary (which may not accurately describe the full argument; for the purposes of this demonstration, it doesn't matter either way), Mr. Bostrom doesn't note that, as a consequence of his earlier statements, the number of potential simulated Earths presumably immensely outnumbers the number of nonsimulated Earths. But that premise doesn't necessarily hold!

If the average number of Earth-simulations such a society runs is lower than the inverse of the chance of an Earth reaching simulation-level progress without somehow self-exterminating or being exterminated (say, fewer than 100 simulations against a 1-in-100 chance), then the balance of potential Earths does not match the unstated premise... at least in universes sufficiently large for multiple Earths to exist (See? A potential hidden premise in my proposal of a hidden premise! These things can be tricky).
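That condition can be put as a toy calculation (the function name and every number here are my own invention, purely for illustration):

```python
def simulated_to_real_ratio(p_survive, sims_per_society):
    """Expected simulated Earths per real Earth, in a toy model where
    a fraction p_survive of Earths reaches simulation-level progress
    and each such society runs sims_per_society Earth-simulations."""
    return p_survive * sims_per_society

# The hidden premise (simulated Earths vastly outnumber real ones)
# holds only when sims_per_society exceeds 1 / p_survive:
print(simulated_to_real_ratio(0.01, 1000))  # many simulations: premise holds
print(simulated_to_real_ratio(0.01, 50))    # too few: premise fails
```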

And if most arguers aren't good at discerning hidden premises, then arguers can feel like they're falling into a trap: that there must be a premise there, hidden, but undiscovered, that provides a more serious challenge to the argument than they can muster. And with that possibility, an average arguer might want to simply be quiet on it, expecting a more skilled arguer to discern a hidden premise that they couldn't.

That doesn't seem rude to me, but humble; a concession of lack of individual skill when faced with a sufficiently sophisticated argument.

Comment by indon on Of Gender and Rationality · 2013-05-16T22:36:13.959Z · score: 6 (8 votes) · LW · GW

Perhaps, by sheer historical contingency, aspiring rationalists are recruited primarily from the atheist/libertarian/technophile cluster, which has a gender imbalance for its own reasons—having nothing to do with rationality or rationalists; and this is the entire explanation.

This seems immensely more likely than anything on that list. Libertarian ideology is tremendously dominated by white males, and I bet the rationality community matches that demographic: primarily male, and primarily Caucasian. Am I wrong? I'm not big into the rationalist community, so this is a theoretical prediction. Meanwhile, which of the listed justifications is equally likely to apply to both white females and non-white males?

Now, that's not to say the list of reasons has no impact. Just that the reason you dismissed, offhand, almost certainly dominates the spread, and the other reasons are comparatively trivial in terms of impact. If you want to solve the problem you'll need to accurately describe the problem.

Comment by indon on Bayesians vs. Barbarians · 2013-05-16T22:22:29.314Z · score: 0 (2 votes) · LW · GW

I think that's an understatement of the potential danger of rationality in war. Not for the rationalist, mind, but for the enemy of the rationalist.

Most rationality, as elaborated on this site, isn't about impassively choosing to be a civilian or a soldier. It's about becoming less vulnerable to flaws in thinking.

And war isn't just about being shot or not shot with bullets. It's about being destroyed or not destroyed, through the exploitation of weaknesses. And a great deal of rationality, on this very site, is about how to not be destroyed by our inherent weaknesses.

A rationalist, aware of these vulnerabilities and wishing to destroy a non-rationalist, can directly apply their rationality to produce weapons that exploit the weaknesses of a non-rationalist. Their propaganda, to a non-rationalist, can be dangerous, and the techniques used to craft it nigh-undetectable to the untrained eye. Weapons the enemy doesn't even know are weapons, until long after they begin murdering themselves because of those weapons.

An easy example would be to start an underground, pacifistic religion in the Barbarian nation. Since the barbarians shoot everyone discovered to profess it, every effort to propagate the faith is directly equivalent to killing the enemy (not just that, but even efforts to promote paranoia about the faith also weaken enemy capability!). And what defense do they have, save for other non-rationalist techniques that dark side rationality is empowered to destroy through clever arguments, created through superior understanding?

And we don't have to wait for a Perfect Future Rationalist to get those things either. We have those weapons right now.

Comment by indon on Your Price for Joining · 2013-05-15T21:46:39.353Z · score: 0 (0 votes) · LW · GW

Speaking as a cat, there are a lot of people who would like to herd me. What makes your project higher-priority than everyone else's?

"Yes, but why bother half-ass involvement in my group?" Because I'm still interested in your group. I'm just also interested in like 50 other groups, and that's on top of the one cause I actually prefer to specialize with.

...It seems to me that people in the atheist/libertarian/technophile/sf-fan/etcetera cluster often set their joining prices way way way too high.

People in the atheist/libertarian/technophile/sf-fan/etc cluster obviously have a ton of different interests, and those interests are time/energy exclusive. Why shouldn't they have high requirements for yet another interest trying to add itself to the cluster?

Comment by indon on Shut up and do the impossible! · 2013-04-28T15:49:43.876Z · score: 1 (1 votes) · LW · GW

Reading the article, I can make a guess as to how the first challenges went: it sounds like their primary, and possibly only, resolution against the challenge was to not pay serious attention to the AI. That's not a very strong approach, as anyone in an internet discussion can tell you: it's easy to get sucked into full engagement with someone who is trying to draw you in, and it's easy for them to keep you engaged when you try to break off.

Their lack of preparation, I would guess, led to their failure against the AI.

A more advanced tactic would involve additional lines of resolution after becoming engaged; contemplating philosophical arguments to use against the AI, for instance, or imagining an authority that forbids you from the action. Were I faced with the challenge, after I got engaged (which would take like 2 minutes max, I've got a bad case of "but someone's wrong on the internet!"), my second line of resolution would be to roleplay.

I would be a hapless grad-student technician whose job is to feed the AI problems and write down the results. That role would have a checklist of things not to do (because they would release, or risk releasing, the AI), and if directly asked to do any of them, he'd go 'talk to his boss', invoking the third line of defense.

Finally I'd be roleplaying someone with the authority to release the AI without being tricked, but he'd sit down at the console prepared, strongly suspecting that something was wrong, and empowered to at any time say "I'm shutting you down for maintenance". He wouldn't bother to engage the AI at its level because he's trying to solve a deeper problem of which the AI's behavior is a symptom. That would make this line of defense the strongest of all, because he's no longer viewing the AI as credible or even intelligent as such; just a broken device that will need to be shut down and repaired after doing some basic diagnostic work.

But even though I feel confident I could beat the challenge, I think the first couple of challenges already make the point: an AI-in-a-box scenario represents a psychological arms race, and no matter how likely the humans' safeguards are to succeed, they only need to fail once. No amount of human victories (because only a single failure matters) or additional lines of human defense (which all have some, however small, chance to be overcome) can unmake that point.

It's strange, though. I did not think for one second that the problem was impossible on either side. I suppose, because it was used as an example of the opposite. Once something is demonstrated, it can hardly be impossible!

Comment by indon on Ethical Inhibitions · 2013-04-25T01:04:31.203Z · score: 0 (0 votes) · LW · GW

That paper seems to focus on raiding activities; if repeated raiding activities are difficult, then wouldn't that increase the utility of extermination warfare?

Indeed, the paper you cite posits that exactly that started happening:

The earliest conclusive archaeological evidence for attacks on settlements is a Nubian cemetery (site 117) near the present-day town of Jebel Sahaba in the Sudan dated at 12,000-14,000 B.P. (7, 12). War originated independently in other parts of the world at dates as late as 4,000 B.P. (13). Otterbein argues that agriculture was only able to develop initially at locations where ambushes, battles, and raids were absent (14).

And that such war predated agriculture.

I noted that humans are the only hominin species alive. To the best of my admittedly limited archaeological knowledge, the others became extinct during the timeframe of the first two phases the paper describes; yet if that timeline were right, wouldn't other hominin communities have likely survived to see the total-war phase of human development?

I would thus posit that total war is much older than even their existing data suggests.

Comment by indon on Ethical Inhibitions · 2013-04-25T00:49:26.256Z · score: 0 (0 votes) · LW · GW

Well, it's not conclusive evidence by any means, but I did note that we have no living hominin relatives; they're all extinct with a capital E. To me, that implies something more than just us being better at hunting-gathering.

And if we, as a species, did exterminate one or more other hominid species, then it seems a small leap of logic to conclude we did the same to each other whenever similar circumstances came up.

Comment by indon on Ethical Inhibitions · 2013-04-22T13:54:57.918Z · score: 6 (6 votes) · LW · GW

I don't get the flippant exclusion of group selection.

To the best of my knowledge, humans are the only species that has been exposed to recurring group-selection events over hundreds of thousands of years, and I would argue that we've gotten very, very good at the activity as a result.

I'm referring, of course, to war. The kind that ends in extermination. The kind that, presumably, humans practiced better than all of our close hominin relatives, who are all conspicuously extinct as of shortly after we spread throughout the world.

This is why I'm not much buying the 'tribes don't die often enough to allow group selection to kick in' argument - obviously, a whole lot of tribes are quite dead, almost certainly at the hands of humans. Even if the tribe death-rate right now is not that high, the deaths of entire hominid species to homo sapiens implies that it has been high in the past. And even with a low tribe death rate, replace 'tribe-murder' with 'tribe-rape' and you still have a modest group selection pressure.

So I don't know why you're talking about the impact of individual evolution in morality. Any prospective species whose morality was guided primarily by individual concerns, rather than the humans-will-rape-and-or-murder-us group concerns, probably got raped-and-or-murdered by a tribe of humans, the species we know to be the most efficient group killing machines on earth.

Under this paradigm - the one where we analyze human psychology as something that made us efficient tribe-murderers - sociopathy makes sense. Indeed, it's something I would argue we all have, with a complicated triggering system built to try to distinguish friend from foe. Full-on sociopathy would probably be to our war-sociopathy as sickle-cell anemia is to malaria resistance; a rare malfunction of a useful trait ('useful' in the evolutionary sense of 'we tribe-murdered all the hominids that didn't have it'). And that's not counting sociopaths who are that way because they simply got so confused that their triggering system broke, no genetics required.

We can't give our senses of honor or altruism a free pass in this analysis, either. If our universal sociopathy is war-sociopathy, then our universal virtue is peace-virtue, also dictated by trigger mechanisms. What we describe as virtue and the lack of it co-evolved in an environment where we used virtue in-group, and outright predation out-group. Groups that were good at both succeeded. Groups that failed at the first were presumably insufficiently efficient at group murder to survive. Groups that failed at the second were murdered by groups good at the second.

Practically the only individual adaptation I can see in that situation is the ability to submit to being conquered, or any other non-fatal-and-you-still-get-to-breed form of humiliation, which might mean you survive while they kill the rest of your tribe. But too much of even that will reduce in-group cohesion: a tribe can only take so many prisoners of a species whose members can express (and as you argue in belief-in-belief, even internalize) the opposite of their actual beliefs, such as "I don't want to murder you in your sleep as vengeance for killing my tribe and enslaving me".

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-21T15:34:04.459Z · score: 0 (0 votes) · LW · GW

I think that's a workable description of the process, but using that you still have the mathematical tendency to appreciate elegance, which, on this process model, doesn't seem like it's in the same place as the "mathematical method" proper - since elegance becomes a concern only after things are proven.

You could argue that elegance is informal, and that this aspect of the "Science vs. Bayes" argument is all about trying to formalize theory elegance (in which case, it could do so across the entire process). I think that'd be fair, but elegance was already a concept in science; it simply predates the formal method and was never made an "official" part of "Science".

So to try to frame this in the context of my original point, those quantum theorists who ignore an argument regarding elegance don't strike me as being scientists limited by the bounds of their field, but scientists being human and ignoring the guidelines of their field when convenient for their biases - it's not like a quantum physicist isn't going to know enough math to understand how arguments regarding elegance work.

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-21T15:22:12.774Z · score: 0 (0 votes) · LW · GW

I wasn't comparing scientists running a simulation with mathematicians running a simulation. I was comparing scientists collecting evidence that might disprove their theories with mathematicians running a simulation - because such a simulation collects data that might disprove their conjectures.

What is an example of something a scientist can prove with 'just logic'?

We'll need to agree on a subject who is a scientist and not a mathematician. The easiest example for me would be to use a computer scientist, but you may argue that whenever a computer scientist uses logic they're actually functioning as a mathematician, in which case the dispute comes down to 'what's a mathematician'.

In the event you don't dispute, I'd note that a lot of computer science has involved logic regarding, for instance, the nature of computation.

In the event you do dispute the status of computer science as a science, then we still have an example of scientists performing mathematics when possible, and really physicists do that too (the quantum formulas that don't mean anything are a fine example, I think). So, to go back to my original point, it's not like an accusation of non-elegance has to come from nowhere; those physicists are undeniably practicing math, and elegance is important there.

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-18T22:42:37.509Z · score: -1 (1 votes) · LW · GW

If that's the case, and if it is also the case that scientists prefer to use proofs and logic where available (I can admittedly only speak for myself, for whom the claim is true), then I would argue that all scientists are necessarily also mathematicians (that is to say, they practice mathematics).

And, if it is the case that mathematicians can be forced to seek inherently weaker evidence when proofs are not readily available, then I would argue that all mathematicians are necessarily also scientists (they practice science).

At that point, it seems like duplication of work to call what mathematicians and scientists do different things. Rather, they execute the same methodology on usually different subject matter (and, mind you, behave identically when given the same subject matter). You don't have to call that methodology "the scientific method", but what are you gonna call it otherwise?

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-18T22:27:14.793Z · score: 0 (0 votes) · LW · GW

Your claim leads me back to my earlier statement.

If I could show you an example of mathematicians running ongoing computer simulations in order to test theories (well. Test conjectures for progressively higher values), would that demonstrate otherwise to you?

Because as Kindly notes, this happens. Mathematicians do sometimes reach for mere necessary-but-not-sufficient evidence for their claims, rather than proof. But obviously, they don't do so when proof is more accessible - and usually, because of the subject matter mathematicians work with, it is.

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-18T19:26:56.109Z · score: 0 (0 votes) · LW · GW

I feel this reply I made captures the link between proof, evidence, and elegance, in both scientific and mathematical fields.

That is to say, where proof is equivalent for two mutually exclusive theories (because sometimes things are proven logically outside mathematics, and not everything in mathematics is proven), evidence is used as a tiebreaker.

And where evidence is equivalent for two mutually exclusive theories (requiring of course that proof also be equivalent), elegance is used as a tiebreaker.

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-18T19:17:15.980Z · score: 1 (1 votes) · LW · GW

You go back and prove it if you can - and are mathematicians special in that regard, save that they deal with concepts more easily proven than most? When scientists in any field can prove something with just logic, they do. Evidence is the tiebreaker for exclusive, equivalently proven theories, and elegance the tiebreaker for exclusive, equivalently evident theories.

And that seems true for all fields labeled either a science or a form of mathematics.

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-18T03:49:20.727Z · score: -1 (1 votes) · LW · GW

If I could show you an example of mathematicians running ongoing computer simulations in order to test theories (well. Test conjectures for progressively higher values), would that demonstrate otherwise to you?

And it's not as if proofs and logic are not employed in other fields when the option is available. Isn't the link between physics and mathematics a long-standing one, and many of the predictions of quantum theory produced on paper before they were tested?

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-18T02:34:41.847Z · score: 0 (0 votes) · LW · GW

That's nice, but how does the Mathematical Method differ from the scientific one?

What differing insights do the 'Math Goggles' offer, as it were?

Comment by indon on The Dilemma: Science or Bayes? · 2013-04-18T01:56:48.071Z · score: 0 (2 votes) · LW · GW

I would argue that there are plenty of fields of science in which elegance is considered important.

Most prominently, mathematics. Mathematicians do run experiments, courtesy of computers, and mathematics is the very field physics must so closely rely on. If mathematicians do not practice the scientific method, what the heck do they do?

Comment by indon on Timeless Identity · 2013-04-17T23:28:21.335Z · score: 2 (2 votes) · LW · GW

Since you're a computer guy (and I imagine many people you talk to are also computer-savvy), I'm surprised you don't use file/process analogues for identity.

  • If I move a file's physical location on my hard drive, it's obviously still the same file, because it has handle and data continuity. This is analogous to existing in different locations, being expressed with different atoms.
  • If I change the content of the file, it's obviously still the same file, because it has handle and location continuity. This is analogous to changing over not-technically-time-but-causal-effect-chains-that-we-may-as-well-call-time-for-convenience.
  • If I delete the file (actually just removing its file handle in most modern systems) and use a utility to recover it, it's obviously still the same file, because it has location and data continuity. This is analogous to cryonics.

Identity is thus describable with three components: handle, data, and location continuity, only two of which are required at any given point. As for having just one:

  • If you have only handle continuity, you have two distinct objects with the same name.
  • If you have only data continuity, then you have duplicate work.
  • If you have only location continuity, you've reformatted.

All three break file identity.
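That two-out-of-three rule can be stated as a tiny toy model, comparing snapshots as a stand-in for continuity over time (all names and values here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class File:
    handle: str     # the file's name/handle
    data: bytes     # the file's contents
    location: int   # physical position on disk (e.g., starting block)

def same_file(a, b):
    """Toy identity rule: identity survives as long as
    at least two of the three continuities hold."""
    shared = sum([a.handle == b.handle,
                  a.data == b.data,
                  a.location == b.location])
    return shared >= 2

original    = File("thesis.txt", b"draft", 100)
moved       = File("thesis.txt", b"draft", 999)  # moved on disk
edited      = File("thesis.txt", b"final", 100)  # contents rewritten
name_reused = File("thesis.txt", b"other", 500)  # only the handle survives
```

`moved` and `edited` each keep two continuities and so count as the same file; `name_reused` keeps only the handle and does not.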

As for cryonics, I would sign up if I could be convinced that I would not become obsolete, or even detrimental, to a society that resurrects me. And looking at the problems already being caused in my country by merely having a normally aging population, at our current rate of social development I don't even think it's a given that I could contribute meaningfully to society during the twilight of my at-present-natural life.

Comment by indon on Moral Complexities · 2013-04-08T20:41:07.797Z · score: 0 (0 votes) · LW · GW

Morality-as-preference, I would argue, is oriented around the use of morality as a tool of manipulation of other moral actors.

Question one: "It is right that I should get the pie" is more convincing, because people respond to moral arguments. Why they do so is irrelevant to the purpose (to use morality to get what you want).

Question two: People don't change their terminal values (which I would argue are largely unconscious, emotional parameters), though they might change how they attempt to achieve them, or one terminal value might override a different one based on mood-affecting-circumstance ("I am hungry, therefore my food-seeking terminal value has priority"). This, btw, answers why it is less morally wrong for a starving man to steal to eat versus a non-starving man.

Question three: "I want this, though I know it's wrong" under this view maps to "I want this, and have no rhetoric with which I can convince anyone to let me have it." This might even include the individual themselves, because people can criticize their own decisions as if they were separate actors, even to the point where they must convince a constructed 'moral actor' that has their own distinct motives, using moral arguments, before permitting themselves to engage in an action.

Comment by indon on Don't Believe You'll Self-Deceive · 2013-04-03T20:04:26.433Z · score: 2 (2 votes) · LW · GW

I find it amusing that in this article, you are advocating the use of deliberate self-deception in order to ward yourself against later deliberate self-deception.

That said, I feel the urge to contribute despite the large time-gap, and I suspect that even if later posts revisit this concept, the relevance of my contribution would be lower there.

"I believe X" is a statement of self-identity - the map of the territory of your mind. But as maps and territories go, self-identity is pretty special, as it is a map written using the territory, and changes in the map can affect the territory as a result - though not necessarily in the exactly intended fashion. So even if deliberate self-deception isn't possible, then some approximation of it probably is.

Moreover, I'd like to question the definition of 'belief' in this context. If we place an emphasis, in the concept, on a belief as something that affects one's actions, then there is such a thing as a false belief that someone holds: that is to say, an assumption someone intentionally makes, regardless of its truth or falsehood, that they use to guide their behavior for external reasons.

That is to say, acting, or role-playing.

I'm rather a believer in cognitive minimalism - that our brains are very uncomplex. So I would assert that the same system that we use to model others' behavior - or to play others' roles - we use for our own self-identity. So when you say, "I believe X", you're effectively saying, "I act as if X is true". And if we use the same system to act like ourselves, to model our own behavior, as we do to model or act like anyone else, then that's most of what the practical impact of a belief is.

What I'm trying to say is that the only difference between acting a certain way and believing a certain thing is that you only do the acting under certain practical conditions - the belief, insofar as a belief is different from an act, is acting in a certain way all the time, for any reason.

Replace "I believe X because..." with "I act as if X is true because..." and I don't think it's confusing anymore. Self-identity modification as a tool is pretty important to human cognition, not just for trying to convince yourself that what you don't think is true, is.

Edit: Actually, I want to amend that last part now that I think on it. I would assert that there is no difference whatsoever; that all reasonable beliefs are contingent. In fact, a big part of acting rationally is about making your beliefs contingent on the truth or falsehood of the object of the belief. Beliefs that aren't based on accuracy are still contingent, just on things like, "This is beneficial to me in some way." And really, a rational belief is similar, it just goes, "I believe X because it is accurate," with the implied addition, "and accuracy is good to have in a belief," so that boils down to a practical reason as well.