Posts

Comments

Comment by Gray_Area on War and/or Peace (2/8) · 2009-01-31T16:39:27.000Z · LW · GW

For what it's worth, I find plenty to disagree with Eliezer about, on points of both style and substance, but on death I think he has it exactly right. Death is a really bad thing, and while humans have diverse psychological adaptations for dealing with death, it seems the burden of proof is on people who do NOT want to make the really bad thing go away in the most expedient way possible.

Comment by Gray_Area on That Alien Message · 2008-05-23T07:50:00.000Z · LW · GW

This is an amusing empirical test for zombiehood -- do you agree with Daniel Dennett?

Comment by Gray_Area on Changing the Definition of Science · 2008-05-20T18:54:19.000Z · LW · GW

"The idea that Bayesian decision theory being descriptive of the scientific process is very beautifully detailed in classics like Pearl's book, Causality, in a way that a blog or magazine article cannot so easily convey."

I wish people would stop bringing up this book to support arbitrary points, like people used to bring up the Bible. There's barely any mention of decision theory in Causality, let alone an argument for Bayesian decision theory being descriptive of all scientific process (although Pearl clearly does talk about decisions being modeled as interventions).

Comment by Gray_Area on Changing the Definition of Science · 2008-05-20T01:31:45.000Z · LW · GW

"Would you care to try to apply that theory to Einstein's invention of General Relativity? PAC-learning theorems only work relative to a fixed model class about which we have no other information."

PAC-learning settings are, if anything, far easier than general scientific induction. So should the latter require more samples, or fewer?

Comment by Gray_Area on Changing the Definition of Science · 2008-05-19T22:26:55.000Z · LW · GW

"Eliezer is almost certainly wrong about what a hyper-rational AI could determine from a limited set of observations."

Eliezer is being silly. People invented computational learning theory, which, among other things, shows the minimum number of samples needed to achieve a given error rate.
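For concreteness, one standard bound of this kind (the realizable case with a finite hypothesis class H; I am quoting it from memory, so treat the form as approximate): to output a hypothesis with error at most epsilon, with probability at least 1 - delta, it suffices to see

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```

samples. The point is only that sample complexity is something you can actually calculate once the setting is pinned down.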

Comment by Gray_Area on Science Doesn't Trust Your Rationality · 2008-05-14T07:28:50.000Z · LW · GW

Eliezer, why are you concerned with untestable questions?

Comment by Gray_Area on Joint Configurations · 2008-04-12T20:25:36.000Z · LW · GW

Richard: Cox's theorem is an example of a particular kind of result in math, where you have some particular object in mind to represent something, you come up with very plausible, very general axioms that you want this representation to satisfy, and then you prove this object is unique in satisfying them. There are equivalent results for entropy in information theory. The problem with these results is that they are almost always constructed with hindsight, so a lot of the time you sneak in an axiom that only SEEMS plausible in hindsight. For instance, Cox's theorem assumes that plausibility is a real number. Why should it be a real number?
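Roughly, the assumptions in question (following Jaynes' presentation; this is a paraphrase from memory, not a quotation):

```latex
\begin{aligned}
&\text{(1) Plausibility is a real number: } \mathrm{pl}(A \mid C) \in \mathbb{R}.\\
&\text{(2) } \mathrm{pl}(\lnot A \mid C) = f\bigl(\mathrm{pl}(A \mid C)\bigr) \text{ for some fixed function } f.\\
&\text{(3) } \mathrm{pl}(A \wedge B \mid C) = g\bigl(\mathrm{pl}(A \mid C),\ \mathrm{pl}(B \mid A \wedge C)\bigr) \text{ for some fixed function } g.
\end{aligned}
```

Plus continuity and consistency requirements; the theorem then says any such system is a rescaling of ordinary probability. Axiom (1) is exactly the one I am pointing at.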

Comment by Gray_Area on Joint Configurations · 2008-04-11T08:06:54.000Z · LW · GW

"The probability of two events equals the probability of the first event plus the probability of the second event."

That holds only for mutually exclusive events.

It is interesting that you insist that beliefs ought to be represented by classical probability. Given that we can construct multiple kinds of probability theory, on what grounds should we prefer one over the other to represent what 'belief' ought to be?

Comment by Gray_Area on Trust in Bayes · 2008-01-30T09:52:15.000Z · LW · GW

"the real reason for the paradox is that it is completely impossible to pick a random integer from all integers using a uniform distribution: if you pick a random integer, on average lower integers must have a greater probability of being picked"

Isn't there a simple algorithm which samples uniformly from a list without knowing its length? Keywords: 'reservoir sampling.'
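A minimal sketch of the idea (Algorithm R with a reservoir of size one); note the stream still has to be finite for the loop to terminate, which is exactly where the 'all integers' case breaks down:

```python
import random

def reservoir_sample(stream):
    """Return one element drawn uniformly at random from an iterable
    whose length is not known in advance (Algorithm R, reservoir size 1)."""
    chosen = None
    for i, item in enumerate(stream, start=1):
        # Replace the current choice with probability 1/i; by induction,
        # every element seen so far ends up retained with probability 1/i.
        if random.randrange(i) == 0:
            chosen = item
    return chosen

print(reservoir_sample(range(10**6)))  # uniform over 0 .. 999999
```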

Comment by Gray_Area on The Allais Paradox · 2008-01-19T12:50:08.000Z · LW · GW

People don't maximize expectations. Expectation-maximizing organisms -- if they ever existed -- died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection.

If people's behavior doesn't agree with the axiom system, the fault may not be with them; perhaps they know something the mathematician doesn't.

Finally, the 'money pump' argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is that if you asked people to make the original choices a million times, they would, correctly, maximize expectations. When you are talking about a million tries, expectations are the appropriate framework. When you are talking about one try, they are not.
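To make the one-try-versus-many-tries point concrete, here is a toy simulation (the payoffs are illustrative, not the exact numbers from the post):

```python
import random

def play(gamble):
    """Sample one payoff from a list of (probability, payoff) pairs."""
    r, acc = random.random(), 0.0
    for p, payoff in gamble:
        acc += p
        if r < acc:
            return payoff
    return gamble[-1][1]

sure_thing = [(1.00, 24000)]             # certainty
risky      = [(0.97, 27000), (0.03, 0)]  # higher expectation, small chance of nothing

# One try: the 3% chance of walking away with nothing is a real, lived outcome.
# A million tries: the average converges on the expectation, so the
# higher-expectation gamble is the right choice.
for n in (1, 1_000_000):
    avg = sum(play(risky) for _ in range(n)) / n
    print(f"risky gamble, {n} play(s): average payoff {avg:,.0f}")
```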

Comment by Gray_Area on Infinite Certainty · 2008-01-09T12:29:30.000Z · LW · GW

Paul Gowder said:

"We can go even stronger than mathematical truths. How about the following statement?

~(P &~P)

I think it's safe to say that if anything is true, that statement (the flipping law of non-contradiction) is true."

Amusingly, this is one of the more controversial laws to bring up, since constructivist mathematicians reject its classical companion, the law of the excluded middle (P or ~P), along with the proof techniques that depend on it.

Comment by Gray_Area on The Amazing Virgin Pregnancy · 2007-12-25T00:26:57.000Z · LW · GW

"Sometimes I can feel the world trying to strip me of my sense of humor."

If you are trying to be funny, then I am afraid the customer is always right. The post wasn't productive, in my opinion, and I have no emotional stake in Christianity at all (not born into it, not raised in it, not currently practicing).

Comment by Gray_Area on Hug the Query · 2007-12-15T12:37:48.000Z · LW · GW

Eliezer, where do your strong claims about the causal structure of scientific discourse come from?

Comment by Gray_Area on The Hidden Complexity of Wishes · 2007-11-26T00:49:18.000Z · LW · GW

"As long as you're wishing, wouldn't you rather have a genie whose prior probabilities correspond to reality as accurately as possible?"

Such a genie might already exist.

Comment by Gray_Area on The Hidden Complexity of Wishes · 2007-11-25T11:52:12.000Z · LW · GW

Every computer programmer, indeed anybody who uses computers extensively has been surprised by computers. Despite being deterministic, a personal computer taken as a whole (hardware, operating system, software running on top of the operating system, network protocols creating the internet, etc. etc.) is too large for a single mind to understand. We have partial theories of how computers work, but of course partial theories sometimes fail and this produces surprise.

This is not a new development. I have only a partial theory of how my car works, but in the old days people only had a partial theory of how a horse works. Even a technology as simple and old as a knife still follows non-trivial physics and so can surprise us (can you predict when a given knife will shatter?). Ultimately, most objects, man-made or not, are 'black boxes.'

Comment by Gray_Area on The Hidden Complexity of Wishes · 2007-11-25T00:34:06.000Z · LW · GW

"It seems contradictory to previous experience that humans should develop a technology with "black box" functionality, i.e. whose effects could not be foreseen and accurately controlled by the end-user."

Eric, have you ever been a computer programmer? Technology becoming more and more of a black box is not only in line with previous experience; I dare say it is the trend as technological complexity increases.

Comment by Gray_Area on The Hidden Complexity of Wishes · 2007-11-24T12:38:05.000Z · LW · GW

On further reflection, the wish as expressed by Nick Tarleton above sounds dangerous, because all human morality may either be inconsistent in some sense, or 'naive' (failing to account for important aspects of reality we aren't aware of yet). Human morality changes as our technology and understanding change, sometimes significantly. There is no reason to believe this trend will stop. I am afraid (genuine fear, not a figure of speech) that the quest to properly formalize and generalize human morality for use by a 'friendly AI' is akin to properly formalizing and generalizing Ptolemaic astronomy.

Comment by Gray_Area on The Hidden Complexity of Wishes · 2007-11-24T10:26:03.000Z · LW · GW

Sounds like we need to formalize human morality first; otherwise you aren't guaranteed consistency. Of course, formalizing human morality seems like a hopeless project. Maybe we can ask an AI for help!

Comment by Gray_Area on Artificial Addition · 2007-11-20T11:15:24.000Z · LW · GW

Well, shooting randomly is perhaps a bad idea, but I think the best we can do is shoot systematically, which is hardly better (it takes exponentially many bullets). So you either have to be lucky, or hope the target isn't very far so you don't need a wide cone to take pot shots at, or hope P=NP.

Comment by Gray_Area on Evolutionary Psychology · 2007-11-12T09:11:20.000Z · LW · GW

billswift said: "Prove it."

I am just saying 'being unpredictable' isn't the same as free will, which I think is pretty intuitive (most complex systems are unpredictable, but presumably very few people will grant them all free will). As far as the relationship between randomness and free will, that's clearly a large discussion with a large literature, but again it's not clear what the relationship is, and there is room for a lot of strange explanations. For example some panpsychists might argue that 'free will' is the primitive notion, and randomness is just an effect, not the other way around.

Comment by Gray_Area on Evolutionary Psychology · 2007-11-11T23:27:54.000Z · LW · GW

Tom McGabe: "Evolution sure as heck never designed people to make condoms and birth control pills, so why can't a computer do things we never designed it to do?"

That's merely unpredictability/non-determinism, which is not necessarily the same as free will.

Comment by Gray_Area on Fake Optimization Criteria · 2007-11-11T23:13:20.000Z · LW · GW

Stefan Pernar said: "I argue that morality can be universally defined."

As Eliezer points out, evolution is blind, and so 'fitness' can have as a side-effect what we would intuitively consider unimaginable moral horrors (much worse than parasitic wasps and cats playing with their food). I think if you want to define 'the Good' in the way you do, you need to either explain how such horrors are to be avoided, or educate the common intuition.

Comment by Gray_Area on Fake Selfishness · 2007-11-08T07:15:21.000Z · LW · GW

Stephen: the altruist can ask the Genie the same thing as the selfish person. In some sense, though, I think these sorts of wishes are 'cheating,' because you are shifting the computational/formalization burden from the wisher to the wishee. (Sorry for the thread derail.)

Comment by Gray_Area on Fake Selfishness · 2007-11-08T05:10:34.000Z · LW · GW

"My definition of an intelligent person is slowly becoming 'someone who agrees with Eliezer', so that's all right."

That's not in the spirit of this blog. Status is the enemy; only facts are important.

Comment by Gray_Area on Natural Selection's Speed Limit and Complexity Bound · 2007-11-05T05:29:24.000Z · LW · GW

Scott said: "25MB is enough for pretty much anything!"

Have people tried to measure the complexity of the 'interpreter' for the 25MB of 'tape' of DNA? Replication machinery is pretty complicated, possibly much more so than any genome.

Comment by Gray_Area on Motivated Stopping and Motivated Continuation · 2007-10-29T03:26:39.000Z · LW · GW

Eliezer, are you familiar with Russell and Wefald's book "Do the Right Thing"?

It's fairly old (1991), but it's a good example of how people in AI view limited rationality.

Comment by Gray_Area on Expecting Short Inferential Distances · 2007-10-23T09:16:41.000Z · LW · GW

This reminds me of teaching. I think good teachers understand short inferential distances at least intuitively if not explicitly. The 'shortness' of inference is why good teaching must be interactive.

Comment by Gray_Area on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-20T06:09:42.000Z · LW · GW

Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).

Comment by Gray_Area on Hold Off On Proposing Solutions · 2007-10-17T04:40:23.000Z · LW · GW

What circles do you run in, Eliezer? I meet a fair number of people who work in AI (you can say I "work in AI" myself), and so far I can't think of a single person who was sure of a way to build general intelligence. Is the attitude you observe common among people who aren't actually doing AI research, but who think about AI?

Comment by Gray_Area on The Logical Fallacy of Generalization from Fictional Evidence · 2007-10-16T08:14:55.000Z · LW · GW

Apparently what works fairly well in Go is to evaluate positions by 'randomly' running lots of games to completion (in other words, you evaluate a position as 'good' if you win most of the random games started from it). Random sampling of the future can work in some domains. I wonder if this method is applicable to answering specific questions about the future (though naturally I don't think science fiction novels are a good sampling method).
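A sketch of the playout idea, written against a hypothetical game interface (to_move, legal_moves, play, is_terminal, winner are placeholders, not any particular Go library):

```python
import random

def monte_carlo_value(position, n_playouts=1000):
    """Estimate how good `position` is for the player to move by playing
    many games out with uniformly random legal moves and counting wins."""
    player = position.to_move
    wins = 0
    for _ in range(n_playouts):
        node = position
        while not node.is_terminal():
            node = node.play(random.choice(node.legal_moves()))
        if node.winner() == player:
            wins += 1
    return wins / n_playouts  # fraction of random futures the player wins
```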

Comment by Gray_Area on Original Seeing · 2007-10-14T06:32:41.000Z · LW · GW

Watching myself trying to write (or speak), I am coming to realize what a horrendous hack the language processes of the brain are. It is sobering to contemplate what sorts of noise and bias this introduces to our attempts to think and communicate.

Comment by Gray_Area on A Priori · 2007-10-09T04:37:18.000Z · LW · GW

I think a discussion of what people mean exactly when they invoke Occam's Razor would be great, though it's probably a large enough topic to deserve its own thread.

The notion of hypothesis parsimony is, I think, a very subtle one. For example, Nick Tarleton above claimed that 'causal closure' is 'the most parsimonious hypothesis.' At some other point, Eliezer claimed the many-worlds interpretation of quantum mechanics as the most parsimonious. This isn't obvious! How is parsimony measured? Would some version of Chalmers' dualism really be less parsimonious? How will we agree on a procedure to compare 'hypothesis size'? How much should we value 'God' vs. 'the anthropic landscape' favored at Stanford?

Comment by Gray_Area on A Priori · 2007-10-09T01:45:45.000Z · LW · GW

"I'm aware that physical outputs are totally determined by physical inputs."

Even this is far from a settled matter, since I think this implies both determinism and causal closure.

Comment by Gray_Area on A Priori · 2007-10-08T23:16:37.000Z · LW · GW

I don't really understand what Eliezer is arguing against. Clearly he understands the value of mathematics, and clearly he understands the difference between induction and deduction. He seems to be arguing that deduction is a kind of induction, but that doesn't make much sense to me.

Nick: you can construct a model where there is a notion of 'natural number' and a notion of 'plus' except this plus happens to act 'oddly' when applied to 2 and 2. I don't think this model would be particularly interesting, but it could be made.

Comment by Gray_Area on Recommended Rationalist Reading · 2007-10-01T21:17:40.000Z · LW · GW

"Causality" by Judea Pearl is an excellent formal treatment of the subject central to empirical science.

Comment by Gray_Area on How to Convince Me That 2 + 2 = 3 · 2007-09-29T09:25:35.000Z · LW · GW

Perhaps 'a priori' and 'a posteriori' are too loaded with historic context. Eliezer seems to associate a priori with dualism, an association which I don't think is necessary. The important distinction is the process by which you arrive at claims. Scientists use two such processes: induction and deduction.

Deduction is reasoning from premises using 'agreed upon' rules of inference such as modus ponens. We call (conditional) claims which are arrived at via deduction 'a priori.'

Induction is updating beliefs from evidence using the rules of probability (Bayes' theorem, etc.). We call (conditional) claims which are arrived at via induction 'a posteriori.'
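A toy instance of the inductive step, purely for illustration (the numbers are made up):

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_h       = 0.01   # prior belief in hypothesis H
p_e_h     = 0.90   # probability of evidence E if H is true
p_e_not_h = 0.05   # probability of E if H is false

p_e   = p_e_h * p_h + p_e_not_h * (1 - p_h)
p_h_e = p_e_h * p_h / p_e
print(round(p_h_e, 3))  # ~0.154: the posterior, an 'a posteriori' claim
```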

Note: both the rules of inference used in deduction and rules of evidence aggregation used in induction are agreed upon as an empirical matter because it has been observed that we get useful results using these particular rules and not others.

Furthermore: both deduction and induction happen only (as far as we know) in the physical world.

Furthermore: deductive claims by themselves are 'sterile,' and making them useful immediately entails coating them with a posteriori claims.

Nevertheless, there is a clear algorithmic distinction between deduction and induction, a distinction which is mirrored in the claims obtained from these two processes.

Comment by Gray_Area on How to Convince Me That 2 + 2 = 3 · 2007-09-28T20:56:56.000Z · LW · GW

"It appears to me that "a priori" is a semantic stopsign; its only visible meaning is "Don't ask!""

No, a priori reasoning is what mathematicians do for a living. Despite operating entirely by means of semantic stopsigns, mathematics seems nevertheless to enjoy rude health.

Comment by Gray_Area on How to Convince Me That 2 + 2 = 3 · 2007-09-28T07:42:06.000Z · LW · GW

Eliezer: I am using the standard definition of 'a priori' due to Kant. Given your responses, I conclude that either you don't believe a priori claims exist (in other words you don't believe deduction is a valid form of reasoning), or you mean by arithmetic statements "2+2=4" something other than what most mathematicians mean by them.

Comment by Gray_Area on How to Convince Me That 2 + 2 = 3 · 2007-09-28T03:41:55.000Z · LW · GW

Eliezer: "Gray Area, if number theory isn't in the physical universe, how does my physical brain become entangled with it?"

I am not making claims about other universes. In particular, I am not asserting platonic idealism is true. All I am saying is that "2+2=4" is an a priori claim, and you don't use rules for incorporating evidence for such claims, as you seemed to imply in your original post.

A priori reasoning does take place inside the brain, and neuroscientists do use a posteriori reasoning to associate physical events in the brain with a priori reasoning. Despite this, a priori claims exist and have their own rules for establishing truth.

Comment by Gray_Area on How to Convince Me That 2 + 2 = 3 · 2007-09-28T01:45:57.000Z · LW · GW

Eliezer: When you are experimenting with apples and earplugs you are indeed doing empirical science, but the claim you are trying to verify isn't "2+2=4" but "counting of physical things corresponds to counting with natural numbers." The latter is, indeed, an empirical statement. The former is a statement about number theory, the truth of which is verified with respect to some model (per Tarski's definition).

Comment by Gray_Area on How to Convince Me That 2 + 2 = 3 · 2007-09-28T00:21:09.000Z · LW · GW

The core issue is whether statements in number theory, and more generally mathematical statements, are independent of physical reality or entailed by our physical laws. (This question isn't as obvious as it might seem; I remember reading a paper claiming to construct a consistent set of physical laws where 2 + 2 has no definite answer.) At any rate, if the former is true, 2+2=4 is outside the province of empirical science, and applying empirical reasoning to evaluate its 'truth' is wrong.

Comment by Gray_Area on What is Evidence? · 2007-09-22T09:15:37.000Z · LW · GW

Why not just say e is evidence for X if P(X) is not equal to P(X|e)?
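A toy check of that definition on an arbitrary joint distribution over X and e:

```python
# e is evidence for (or against) X iff P(X) != P(X | e).
joint = {(True, True): 0.30, (True, False): 0.20,   # P(X, e)
         (False, True): 0.10, (False, False): 0.40}

p_x         = sum(p for (x, _), p in joint.items() if x)   # 0.5
p_e         = sum(p for (_, e), p in joint.items() if e)   # 0.4
p_x_given_e = joint[(True, True)] / p_e                    # 0.75

print(p_x, p_x_given_e)  # 0.5 != 0.75, so e is evidence for X
```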

Incidentally, I don't really see the difference between probabilistic dependence (as above) and entanglement. Entanglement is dependence in the quantum setting.

Comment by Gray_Area on "Science" as Curiosity-Stopper · 2007-09-05T04:34:09.000Z · LW · GW

Eliezer said: "These are blog posts, I've got to write them quickly to pump out one a day."

I am curious what motivated this goal.

Comment by Gray_Area on Say Not "Complexity" · 2007-08-29T22:29:22.000Z · LW · GW

In computer science there is a saying 'You don't understand something until you can program it.' This may be because programming is not forgiving to the kind of errors Eliezer is talking about. Interestingly, programmers often use the term 'magic' (or 'automagically') in precisely the same way Eliezer and his colleague did.

Comment by Gray_Area on The Futility of Emergence · 2007-08-28T19:15:23.000Z · LW · GW

Some other vague concepts people disagree on: 'cause,' 'intelligence,' 'mental state,' and so on.

I am a little suspicious of projects to 'exorcise' vague concepts from scientific discourse. I think scientists are engaged in a healthy enough enterprise that eventually they will be able to sort out the uselessly vague concepts from the 'vague because they haven't been adequately understood and defined yet'.

Comment by Gray_Area on The Futility of Emergence · 2007-08-27T21:39:45.000Z · LW · GW

I'll try a silly info-theoretic description of emergence:

Let K(.) be Kolmogorov complexity. Assume you have a system M consisting of, and fully determined by, n small identical parts C. Then M is 'emergent' if M can be well approximated by an object M' such that K(M') << n*K(C).

The particulars of the definition aren't even important. What's important is that this is (or can be) a mathematical, rather than a scientific, definition, something like the definition of a derivative. Mathematical concepts seem more about description, representation, and modeling than about prediction and falsifiability. Mathematical concepts may not increase our ability to predict directly, but they do so indirectly, as they form a part of larger scientific predictions. Derivatives don't predict anything themselves, but many physical laws are stated in terms of derivatives.
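Kolmogorov complexity is uncomputable, so the definition can't be checked directly; as a purely illustrative (and admittedly degenerate) stand-in, one can use a real compressor and compare the description length of the whole to the summed description lengths of its parts:

```python
import zlib

part = b"spin-up "      # one 'small identical part' C (illustrative)
n = 10_000
whole = part * n        # a system M fully determined by its n parts

len_whole = len(zlib.compress(whole))
len_parts = n * len(zlib.compress(part))
print(len_whole, len_parts)  # len_whole is vastly smaller than n times the
                             # per-part proxy for K(C), so M passes this
                             # toy version of the criterion
```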

Comment by Gray_Area on Fake Causality · 2007-08-24T10:58:43.000Z · LW · GW

Robin Hanson said: "Actually, Pearl's algorithm only works for a tree of cause/effects. For non-trees it is provably hard, and it remains an open question how best to update. I actually need a good non-tree method without predictable errors for combinatorial market scoring rules."

To be even more precise, Pearl's belief propagation algorithm works for so-called 'poly-tree' graphs, which are directed acyclic graphs without undirected cycles (i.e., cycles which show up if you drop directionality). The state of the art for exact inference in Bayesian networks is various junction-tree-based algorithms (essentially you run an algorithm similar to belief propagation on a graph where cycles have been forced out by merging nodes). For large intractable networks, people resort to approximating what they are interested in by sampling. Of course there are lots of approaches to this problem: Bayesian network inference is a huge industry.
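To make the message-passing idea concrete, here is a tiny special case, a chain A -> B -> C (the simplest poly-tree), computing P(A | C = 1) by passing the evidence backward; the numbers are arbitrary and this is not the general algorithm:

```python
p_a   = {0: 0.7, 1: 0.3}                              # P(A)
p_b_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}    # P(B | A)
p_c_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}    # P(C | B)

# Backward ('lambda') messages carrying the evidence C = 1 toward A.
lam_b = {b: p_c_b[b][1] for b in (0, 1)}                                 # P(C=1 | B=b)
lam_a = {a: sum(p_b_a[a][b] * lam_b[b] for b in (0, 1)) for a in (0, 1)}

# Combine with the prior and normalise to get the posterior over A.
unnorm = {a: p_a[a] * lam_a[a] for a in (0, 1)}
z = sum(unnorm.values())
print({a: round(v / z, 3) for a, v in unnorm.items()})  # P(A | C = 1)
```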

Comment by Gray_Area on Science as Attire · 2007-08-23T10:35:12.000Z · LW · GW

Eliezer said: "I encounter people who are quite willing to entertain the notion of dumber-than-human Artificial Intelligence, or even mildly smarter-than-human Artificial Intelligence. Introduce the notion of strongly superhuman Artificial Intelligence, and they'll suddenly decide it's "pseudoscience"."

It may be that the notion of strongly superhuman AI runs into people's preconceptions they aren't willing to give up (possibly of religious origins). But I wonder if the 'Singularitarians' aren't suffering from a bias of their own. Our current understanding of science and intelligence is compatible with many non-Singularity outcomes:

(a) 'Human-level' intelligence is, for various physical reasons, an approximate upper bound on intelligence.

(b) Scaling past 'human-level' intelligence is possible but difficult due to extremely poor returns (e.g., logarithmic rather than exponential growth past a certain point).

(c) Scaling past 'human-level' intelligence is possible and not difficult, but runs into an inherent 'glass ceiling' far below the 'incomprehensibility' of the resulting intelligence.

and so on

Many of these scenarios seem as interesting to me as a true Singularity outcome, but my perception is they aren't being given equal time. Singularity is certainly more 'vivid,' but is it more likely?