Brandon Reinhart: Jack Thompson. (Fortunately, he's been disbarred now, so maybe that particular vein of stupidity is getting tapped out.)
Will Pearson: are you suggesting that the simplest algorithm for intelligence is too large to fit in human memory?
Dude. Dude. No wonder you've been so emphatic in your denunciations of mysterious answers to mysterious questions.
Regarding the first reply here (a year later...): perhaps there is another problem visible here, the problem of when advice is too plain. The story advises in a fashion so transparently evident that even SHRDLU could get it: the poor student quite literally wasn't looking at anything, so Pirsig/Phædrus gave her a topic so mundane that she had to go down and see for herself. If Zen and the Art of Motorcycle Maintenance were a math textbook, the rule would be clear: "if you examine something, you will have something to say about it." But because writing is a mysterious art, it is assumed that the moral of a story about writing must be mysterious as well.
(Oddly, I never fell prey to this with the visual arts. I thank whoever told me about the negative-space/outline trick - that worked so well that I cached "drawing is seeing" instead.)
You're right that he should be able to engage standard critiques, Zubon, but if my (negligible) experience with the philosophy of free will is any indication, many "standard critiques" are merely exercises in woolly thinking. It's reasonable for him to step back and say, "I don't have time to deal with this sort of thing."
Wow, there are a lot of nihilists here.
I answered on my own blog, but I guess I'm sort of with dloye at 08:54: I'd try to keep the proof a secret, just because it feels like it would be devastating to a lot of people.
Robin Hanson: I don't think that's what he's getting at. Yes, surface similarities are correlated with structural similarities, or mathematical similarities (I know of a guy who found a couple of big research papers towards his astrophysics PhD via a colleague's analogy between gravitational and electromagnetic waves), but they show up so often under other circumstances that it is meet to be suspicious of them. The outside view works really well for Christmas shopping, essay writing, program development, and the like because it is obvious that the structural similarities are present.
Joseph Knecht: When you say that you haven't seen evidence that puts "soul" on shaky grounds, [...]
Sorry, poor wording - please substitute "but nor have I seen evidence against 'choice' of the kind which puts 'soul' on shaky grounds." I am familiar with many of the neurological arguments against souls - I mentioned the concept because I am not familiar with any comparable evidence regarding choice. (Yes, I have heard of the experiments which show nervous impulses towards an action prior to the time when the actor thought they decided. That's interesting, but it's no Phineas Gage.)
Joseph Knecht: It is a clash of intuitions, then? I freely admit that I have seen no such account either, but nor have I seen the kind of evidence which puts "soul" on shaky grounds. And "fire" is comparably ancient to "soul", and still exists.
In fact, "fire" even suggests an intermediate position between yours and that which you reject: chemically, oxidation reactions like that of fire show up all over the place, and show that the boundary between "fire" and "not fire" is far from distinct. Would it be surprising were it found that the boundary between the prototypical human choices Eliezer names and your not-choices is blurry in a similar fashion?
Kip Werking, I can see where you're coming from, but "free will" isn't just some attempt to escape fatalism. Look at Eliezer's post: something we recognize as "free will" appears whenever we undergo introspection, for example. Or look at legal cases: acts are prosecuted entirely differently if they are not done of one's "free will", contracts are annulled if the signatories did not sign of their own "free will". We praise good deeds and deplore evil deeds that are done of one's own "free will". Annihilation of free will requires rebuilding all of these again from their very foundations - why do so, then, when one may be confident that a reasonable reading of the term exists?
Joseph Knecht: Why do you think that the brain would still be Eliezer's brain after that kind of change?
(Ah, it's so relaxing to be able to say that. In the free will class, they would have replied, "Mate, that's the philosophy of identity - you have to answer to the ten thousand dudes over there if you want to try that.")
Andy Wood: So, while I highly doubt that CC is equivalent to my view in the first place, I'm still curious about what view you adopted to replace it.
I suspect (nay, know) my answer is still in flux, but it's actually fairly similar to classical compatibilism - a person chooses of their own free will if they choose by a sufficiently-reasonable process and if other sufficiently-reasonable processes could have supported different choices. However, following the example of Angela Smith (an Associate Professor of Philosophy at the University of Washington), I hold that free will is not required for responsibility. After all, is it not reasonable to hold someone responsible for forgetting an important duty?
Hmm, it seems my class on free will may actually be useful.
Eliezer: you may be interested to know that your position corresponds almost precisely to what we call classical compatibilism. I was likewise a classical compatibilist before taking my course - under ordinary circumstances, it is quite a simple and satisfactory theory. (It could be your version is substantially more robust than the one I abandoned, of course. For one, you would probably avoid the usual trap of declaring that agents are responsible for acts if and only if the acts proceed from their free will.)
Hopefully Anonymous: Are you using Eliezer's definition of "could", here? Remember, Eliezer is saying "John could jump off the cliff" means "If John wanted, John would jump off the cliff" - it's a counterfactual. If you reject this definition as a possible source of free will, you should do so explicitly.
This is the limit of Eld science, and hence, the limit of public knowledge. Wait, so these people are doing this only for recreation?
No - this is Eliezer's alternate-universe storyline, in which the science-equivalent is kept secret the same way the Pythagoreans kept theirs. The initiates - the people with access to the secret knowledge - use it for technology, just as we do, except that because the general public doesn't know the science, the tech looks amazing.
The idea, I believe, is to reduce the attraction of bogus secret societies. In Brennan's world, anyone who made one would be challenged to accomplish as great or greater feats than the Bayesians - a task that a mere mystery cult would fail at.
Richard Kennaway: I don't think we actually disagree about this. It's entirely possible that doubling the N of a brain - whatever the relevant N would be, I don't know, but we can double it - would mean taking up much more than twice as many processor cycles (how fast do neurons run?) to do the same amount of processing.
In fact, if it's exponential, the speed would drop by orders of magnitude for every constant increase in N. That would kill superintelligent AI as effectively as the laws of thermodynamics killed perpetual motion machines.
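(To put rough numbers on that - my illustration, not part of the original comment: if the cost of running the algorithm grows exponentially in N, then

\[ T(N) = c \cdot 2^{N} \quad\Rightarrow\quad \frac{T(N+10)}{T(N)} = 2^{10} \approx 10^{3}, \]

so every additional 10 units of N costs another factor of a thousand in runtime on fixed hardware.)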
On the other hand, if you believe Richard Dawkins, Anatole France's brain was less than 1000 cc, and brains bigger than 2000 cc aren't unheard of (he lists Oliver Cromwell as an unverified potential example). Even if people are exchanging metaphorical clock rate for metaphorical instruction-set size and vice versa, and even if people have different neuron densities, this would seem to suggest the algorithm isn't particularly high-order - or, if it is, that the high-order bottlenecks haven't kicked in at our current scale.
Richard Kennaway: I don't know what you mean - the subset-sum problem is NP-hard (and NP-complete), and the best known algorithms can - given enough resources - be run on lists of any size, in O(2^(N/2) N) time. It scales - it can be run on bigger sets - even if it is impractical to do so. Likewise, the traveling salesman problem can be solved in O(N^2 2^N) time. What I'm asking is whether there are any problems where we can't change N. I can't conceive of any.
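(Not part of the original comment, but to make the "it scales, even if impractically" point concrete, here is a minimal Python sketch of the meet-in-the-middle idea behind that O(2^(N/2) N) bound; the function name and example numbers are mine, purely for illustration.)

```python
from itertools import combinations
from bisect import bisect_left

def subset_sum_exists(nums, target):
    """Meet-in-the-middle subset sum (Horowitz-Sahni style):
    enumerate all subset sums of each half (about 2^(N/2) apiece),
    sort one side, and binary-search for complements."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_subset_sums(items):
        return [sum(combo)
                for r in range(len(items) + 1)
                for combo in combinations(items, r)]

    left_sums = all_subset_sums(left)
    right_sums = sorted(all_subset_sums(right))

    for s in left_sums:
        need = target - s
        i = bisect_left(right_sums, need)
        if i < len(right_sums) and right_sums[i] == need:
            return True
    return False

# Nothing in the algorithm caps N; only the 2^(N/2) cost makes large N impractical.
print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum_exists([3, 34, 4, 12, 5, 2], 30))  # False
```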
The Turing test doesn't look for intelligence. It looks for 'personhood' - and it's not even a definitive test, merely an application of the point that something that can fool us into thinking it's a person is due the same regard we give people.
I said the Turing test was weak - in fact, I linked an entire essay dedicated to describing exactly why the Turing test was weak. In fact, I did so entirely to accent your point that we don't know what we're looking for. What we are looking for, however, is, by the Church-Turing thesis, an algorithm, an information-processing algorithm, and I invite the computer scientists et al. here to name any known information-processing algorithm which doesn't scale.
I'm not denying your point, Caledonian - right now, our best conception of a test for smarts in the sense we want is the Turing test, and the Turing test is pretty poor. If we actually understood intelligence, we could answer your questions. But as long as we're all being physicalists here, we're obliged to believe that the human brain is a computing machine - special purpose, massively parallel, but almost certainly Turing-complete and no more. And, by analogy with the computing machines, we should expect to be able to scale the algorithm to bigger problems.
I'm not saying it's practical. It could be the obvious scalings would be like scaling the Bogosort. But it would seem to be special pleading to claim it was impossible in theory.
I have to admit to some skepticism as well, Caledonian, but it seems clear to me that it should be possible with P > .99 to make an AI which is much smarter but slower than a human brain. And even if increasing the effective intelligence goes as O(exp(N)) or worse, a Manhattan-project-style parallel-brains-in-cooperation AI is still not ruled out.
Oddly enough, Lincoln didn't actually say exactly that. A minor distinction, true, but there it is.
Not replying to the comment thread: I think the quote might actually be Deuteronomy 13:6-10 in the King James Version.
Oh, that's subtle.
Check me if I'm wrong: according to the MWI, the evolving waveform itself can include instantiations of human beings, just as an evolving Conway's Life grid can include gliders. Thus, if we're proposing that humans exist (a reasonable hypothesis), they exist in the waveform, and if the Bohmian particles do not influence the evolution of the waveform, they exist in the waveform the same way whether or not Bohm's particles are there. And, in fact, if they do not influence the amplitude distribution, they're epiphenomenal in the same sense that people like Chalmers claim consciousness is.
If the particles do influence the evolution of the amplitude distribution, everything changes (of course). But that remains to be shown.
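(A toy illustration of that Life analogy - not from the original comment, and the code and coordinates are just my sketch: the glider is a pattern instantiated within the evolving grid, produced by nothing beyond the update rule itself, much as the claim is that people are instantiated within the evolving waveform.)

```python
from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Life on an unbounded grid.
    live_cells is a set of (x, y) coordinates of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider: a persistent, moving pattern that exists *in* the grid state.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After four steps the same shape reappears, shifted one cell diagonally.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```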
Having quantum collapses IS having Many Worlds... unless and until you can demonstrate that the two are different in some way.
...
03:16 was me - curse you, Typepad!
Correct me if I am wrong, but MWI does have noticeable consequences, or at least implications: for example, interference at all length-scales and proper evaluation of the waveform equations implying the Born probabilities. Neither of these are implicit in the Copenhagen interpretation - in fact, the first is contradicted.
...wait, the collapse postulate doesn't suggest different results? In order for collapse to occur, the amplitude-summing effect we see at the level of particles would have to vanish at some point. Which implies that above that point, "interference" effects will vanish.
We might have a hard time running the experiment, but that sounds like a different result to me.
Unknown, I don't think Egan's Law has anything to do with facing reality. If I read it correctly, Egan is saying that any theory (e.g. quantum mechanics, general relativity, the standard model) ought to predict normal events on the level of normal events. If relativity predicted that a ball dropped from a height of 4.9 meters would take 5.3 seconds to hit the ground, relativity would be disproven. It all must add up to normality.
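(Spelling out the arithmetic behind that example - ordinary Newtonian free fall, my addition, not the original comment's:

\[ t = \sqrt{\frac{2h}{g}} = \sqrt{\frac{2 \times 4.9\ \mathrm{m}}{9.8\ \mathrm{m/s^2}}} = 1\ \mathrm{s}, \]

so a theory that predicted 5.3 seconds would be contradicted by the perfectly normal observation that the ball lands in about a second.)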
Robin Z: The motivation for suspecting that something funny happens as you try to scale up decoherence to full-blown many-worlds comes from the serious problems that many-worlds has. Beyond the issue with predicting the Born postulate, there are serious conceptual problems with defining individual worlds, even emergently.
Enough said - I withdraw my implied objection. I, too, hope the experiment you refer to will provide new insight.
Is there any reason to believe that something interferes with the physics between "microscopic decoherence" and "macroscopic decoherence" that affects the latter and not the former? I ask because I'm getting strong echoes of the "microevolution vs. macroevolution" misconception - in both cases, people seem to be rejecting the obvious extension of a hypothesis to the human level.