Posts

Comments

Comment by ME3 on The Meaning of Right · 2008-07-29T14:59:46.000Z · LW · GW

You know, I think Caledonian is the only one who has the right idea about the nature of what's being written on this blog. I will miss him because I don't have the energy to battle this intellectual vomit every single day. And yet, somehow I am forced to continue looking. Eliezer, how does your metamorality explain the desire to keep watching a trainwreck?

Comment by ME3 on Math is Subjunctively Objective · 2008-07-25T15:15:30.000Z · LW · GW

I think it doesn't make sense to suggest that 2 + 3 = 5 is a belief. It is the result of a set of definitions. As long as we agree on what 2, +, 3, =, and 5 mean, we have to agree on what 2 + 3 = 5 means. I think that if your brain were subject to a neutrino storm and you somehow felt that 2 + 3 = 6, you would still be able to test whether 2 + 3 = 6 by other means, such as counting on your fingers.

Once you start asking why these things are the way they are, don't you have to start asking why anything exists at all, and what it means for anything to exist? And I'm pretty sure at that point, we are firmly in the province of philosophy and there are no equations to be written, because the existence of the equations themselves is part of the question we're asking.

But I mean, this question has been in my mind since the beginning of the quantum series. I've written a lot of useful software since then, though, without entertaining it much. Do you think maybe it's just better to get on with our lives? It's not a rhetorical question, I really don't know.

Comment by ME3 on Where Recursive Justification Hits Bottom · 2008-07-08T14:20:27.000Z · LW · GW

I read it as saying, "Suppose there is a mind with an anti-Occamian and anti-Laplacian prior. This mind believes that . . ." but of course saying "there is a possible mind in mind design space" is a much stronger statement than that, and I agree that it must be justified. I don't see how such a mind could possibly do anything that we consider mind-like, in practice.

Really, I don't know if this has been mentioned before, but formal systems and the experimental process were developed centuries ago to solve the very problems that you keep talking about (rationality, avoiding self-deception, etc.). Why do you keep trying to bring us back to 500 BC and the methods of the ancient Greeks? Is it because you find actual math too difficult? Trust me, it's still easier to do math right than to do informal reasoning right. On the other hand, it's much more rewarding to do informal reasoning wrong than to do the math wrong. This may be the source of the problem.

Comment by ME3 on Created Already In Motion · 2008-07-01T14:03:35.000Z · LW · GW

Isn't a silicon chip technically a rock?

Also, I take it that this means you don't believe in the whole, "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series and it seemed to me absurd, sort of for the reasons you say.

Comment by ME3 on The Moral Void · 2008-06-30T14:31:21.000Z · LW · GW

If everything I do and believe is a consequence of the structure of the universe, then what does it mean to say my morality is/isn't built into the structure of the universe? What's the distinction? As far as I'm concerned, I am (part of) the structure of the universe.

Also, regarding the previous post, what does it mean to say that nothing is right? It's like if you said, "Imagine if I proved to you that nothing is actually yellow. How would you proceed?" It's a bizarre question because yellowness is something that is in the mind anyway. There is simply no fact of the matter as to whether yellowness exists or not.

Comment by ME3 on No Universally Compelling Arguments · 2008-06-26T14:22:13.000Z · LW · GW

Presumably, morals can be derived from game-theoretic arguments about human society just like aerodynamically efficient shapes can be derived from Newtonian mechanics. Presumably, Eliezer's simulated planet of Einsteins would be able to infer everything about the tentacle-creatures' morality simply based on the creatures' biology and evolutionary past. So I think this hypothetical super-AI could in fact figure out what morality humans subscribe to. But of course that morality wouldn't apply to the super-AI, since the super-AI is not human.

Comment by ME3 on The Psychological Unity of Humankind · 2008-06-24T15:09:56.000Z · LW · GW

I agree with the basic points about humans. But if we agree that intelligence is basically a guided search algorithm through design-space, then the interesting part is what guides the algorithm. And it seems like at least some of our emotions are an intrinsic part of this process, e.g. perception of beauty, laziness, patience or lack thereof, etc. In fact, I think that many of the biases discussed on this site are not really bugs but features that ordinarily work so well for the task that we don't notice them unless they give the wrong result (just like optical illusions). In short, I think any guided optimization process will resemble human intelligence in some ways (don't know which ones), for reasons that I explained in my response to the last post.

Which actually makes me think of something interesting: possibly, there is no optimal guided search strategy. The reason humans appear to succeed at it is that there are many of us thinking about the same thing at any given time, and each of us has a slightly differently tuned algorithm. So, one of us is likely to end up converging on the solution even though nobody has an algorithm that can find every solution. And people self-select for the types of problems that they're good at.

Comment by ME3 on Optimization and the Singularity · 2008-06-23T14:48:50.000Z · LW · GW

talk as if the simple instruction to "Test ideas by experiment" or the p

I think you're missing something really big here. There is such a thing as an optimal algorithm (or process). The most naive implementation of a process is much worse than the optimal one, but infinitely better than nothing. Every successive improvement brings us asymptotically closer to the optimal algorithm, but later improvements can't give you the same order of improvement as the earlier ones. Just because we've gone from O(n^2) to O(n log(n)) in sorting algorithms doesn't mean we'll eventually get to O(1).
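To make the diminishing-returns point concrete, here is a rough sketch (illustrative only, not from the original comment) that counts comparisons for a quadratic sort versus an O(n log n) sort on the same input; the gap between them is large, but neither will ever reach O(1):

```python
import random

def insertion_sort_comparisons(a):
    """Insertion sort on a copy of `a`; returns the comparison count."""
    a, count = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return count

def merge_sort_comparisons(a):
    """Merge sort; returns (sorted list, comparison count)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_comparisons(a[:mid])
    right, cr = merge_sort_comparisons(a[mid:])
    merged, count = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        count += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, count

random.seed(0)
data = [random.random() for _ in range(1000)]
print(insertion_sort_comparisons(data))   # roughly n^2 / 4 comparisons
print(merge_sort_comparisons(data)[1])    # roughly n * log2(n) comparisons
```

For n = 1000 the quadratic sort does tens of times more comparisons, yet both are still a long way from any constant-time fantasy.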

Aha! you say. But human brains are so inefficient that we haven't actually gone even a smidgen of the way to the optimal algorithm, and there is a ton more room to improve. But computers already overcome many of the inefficiencies of human brains. Our brains do a decent job of pruning the search space down to a near-optimal solution, and computers take care of the work-intensive step of going from near-optimal to optimal. And as our software gets better, we have to prune the search space less and less before we give the problem to the computer.

Of course, maybe we still have many orders of magnitude of improvement to go. But you can't just assume that.

Comment by ME3 on Grasping Slippery Things · 2008-06-17T14:43:56.000Z · LW · GW

I think that the "could" idea does not need to be confined to the process of planning future actions.

Suppose we think of the universe as a large state transition matrix, with some states being defined as intervals because of our imperfect knowledge of them. Then, any state in the interval is a "possible state" in the sense that it is consistent with our knowledge of the world, but we have no way to verify that this is in fact the actual state.

Now something that "could" happen corresponds to a state that is reachable from any of the "possible states" using the state transition matrix (in the linear systems sense of reachable). This applies to the world outside ("A meteor could hit me at any moment") or to my internal state ("I could jump off a cliff") in the sense that given my imperfect knowledge of my own state and other factors, the jump-off-a-cliff state is reachable from this fuzzy cloud of states.
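The "could"-as-reachability idea above can be sketched in a few lines. This is a toy model (all state names and transitions are hypothetical, and it uses discrete graph reachability rather than the linear-systems formulation): something "could" happen if its state is reachable from any state consistent with our knowledge.

```python
from collections import deque

def reachable(possible_states, transitions):
    """Breadth-first closure: every state reachable from any state
    consistent with our knowledge (the 'possible_states' cloud)."""
    seen = set(possible_states)
    frontier = deque(possible_states)
    while frontier:
        s = frontier.popleft()
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Toy world: imperfect knowledge narrows "my state" to a cloud of
# possibilities, and the cliff jump is reachable from one of them.
transitions = {
    "home": {"hiking", "office"},
    "hiking": {"cliff_edge"},
    "cliff_edge": {"jumped"},
}
print(reachable({"home"}, transitions))
# 'jumped' is in the returned set, so given what we know, it "could" happen.
```

In the continuous, linear-systems version, the same question becomes whether the target state lies in the reachable subspace of the transition matrix applied to the interval of possible initial states.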

Comment by ME3 on Passing the Recursive Buck · 2008-06-16T16:01:50.000Z · LW · GW

In other words, the algorithm is,

explain_box(box) { if (|box.boxes| > 1) print(box.boxes) else explain_box(box.boxes[0]) }

which works for most real-world concepts, but gets into an infinite loop if the concept is irreducible.
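A runnable rendering of that pseudocode (the `Box` class and the example concepts are hypothetical stand-ins): a reducible concept bottoms out in parts, while an irreducible one "contains" only itself and sends the recursion spinning until Python gives up.

```python
class Box:
    """Hypothetical stand-in for a concept and its constituent parts."""
    def __init__(self, name, boxes=None):
        self.name = name
        self.boxes = boxes if boxes is not None else []

def explain_box(box):
    # If the concept splits into several parts, report them;
    # otherwise keep opening the single box inside.
    if len(box.boxes) > 1:
        return [b.name for b in box.boxes]
    return explain_box(box.boxes[0])

# A reducible concept bottoms out in parts:
water = Box("water", [Box("hydrogen"), Box("oxygen")])
print(explain_box(Box("wet stuff", [water])))  # ['hydrogen', 'oxygen']

# An "irreducible" concept contains only itself -> infinite loop:
qualia = Box("qualia")
qualia.boxes = [qualia]
try:
    explain_box(qualia)
except RecursionError:
    print("irreducible: the explanation never bottoms out")
```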

Comment by ME3 on Einstein's Superpowers · 2008-05-30T16:31:34.000Z · LW · GW

There was an article in some magazine not too long ago that most people here have probably read, about how if you tell kids that they did good work because they are smart, they will not try as hard next time, whereas if you tell kids that they did good work because they worked hard, they will try harder and do better. This matches my own experience very well, because for a long time, I had this "smart person" approach to things, where I would try just hard enough to make a little headway, then either dismiss the problem as easy or give up. I see a lot of people falling into this trap, and they are almost always the ones who think they are smart, and who are referred to by others as smart.

I think that maybe it's not about choosing problems, even. I think that it's about walking any given path for long enough that you get to a place where nobody else has been, and that's when you achieve some kind of status in other people's eyes.

Comment by ME3 on Timeless Causality · 2008-05-29T15:00:41.000Z · LW · GW

Isn't causality just a map of a world governed by physical laws? If a billiard ball strikes another ball, causing it to move, that is just our way of describing the motions of the balls. And besides, the universe doesn't split the world up into individual "objects" or "events," so how can causality really exist?

By the way, any physical system is defined not just by its positions, but by its time derivatives as well (positions and velocities should be enough to describe the complete state of a classical system; the second derivatives then follow from the dynamics). So when you talk about frozen states in a timeless universe, they still have to have time derivatives (in our perception of them). In other words, a sequence of still claymation frames and continuous motion may produce the same movie, but they correspond to very different realities.

Comment by ME3 on Timeless Beauty · 2008-05-28T15:09:54.000Z · LW · GW

iwdw: there has been some thinking about the universe as an actual Game of Life; Stephen Wolfram's A New Kind of Science is the one that comes to mind, but I'm sure there are more reputable sources that he stole the idea from. I believe that this line of thinking runs into trouble with special relativity.

Speaking of which, has anyone ever attempted to actually model space as a graph of relationships between points, in a computer program? Something like the distance-configuration-space in the last post? It occurs to me that this could actually be a more robust representation for some purposes than just storing the xyz coordinates.
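One minimal way to try the relational representation (a sketch under assumed names, not any existing program): store space as a symmetric table of pairwise distances with no global xyz frame, and note that it is automatically invariant under translations and rotations of whatever coordinates it was built from.

```python
import math
from itertools import combinations

# Hypothetical points, given here in coordinates only so we can
# build the coordinate-free representation from something concrete.
points = {"a": (0.0, 0.0), "b": (3.0, 0.0), "c": (0.0, 4.0)}

def distance_graph(points):
    """Relational representation: distance for each unordered pair."""
    return {frozenset((p, q)): math.dist(points[p], points[q])
            for p, q in combinations(points, 2)}

g = distance_graph(points)
print(g[frozenset(("b", "c"))])  # 5.0

# Translating the underlying coordinates changes nothing in the
# relational picture -- there is no absolute position to change:
shifted = {k: (x + 10.0, y - 2.0) for k, (x, y) in points.items()}
print(distance_graph(shifted) == g)  # True
```

For anything beyond a toy, one would also need a way to recover approximate coordinates when convenient (classical multidimensional scaling does this up to a rigid motion), which is roughly why the relational form can be more robust for some purposes.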

Eliezer: I actually have been getting the insights you speak of repeatedly throughout this series, and it's one of the reasons why I find it helpful to post comments - because it forces me to think through the ideas well enough to get their occasional mind-bendingness. It's also why I have continued reading despite all the what-is-Science business.

But I still think that the subjective time-like-ness of time, as well as the concept of causality, are all caused (ha-ha) by the universe starting out in a low-entropy state. So if you had a toy block universe in your hands, you would still see a direction in the block corresponding to time. There is no way to assign a meaningful distance in that direction for the whole universe because of the locality of physics, but the direction is global, isn't it?

Comment by ME3 on Timeless Physics · 2008-05-27T17:29:43.000Z · LW · GW

But the main thing that's different about time is that it has a clear direction whereas the space dimensions don't. This is caused by the fact that the universe started out in a very low-entropy state, and since then has been evolving into higher entropy. I don't know if it's even possible to answer the question of why the universe started out the way it did -- it's almost like asking why anything exists at all. But whatever the reason, the universe is very uniform in its space dimensions, but very non-uniform in its time dimension.

Comment by ME3 on Timeless Physics · 2008-05-27T15:36:02.000Z · LW · GW

Doesn't Lorentz invariance already pretty much take care of the relativity of time? As long as our description is Lorentz-invariant, we're free to reparameterize the universe any way we want, and the description stays the same. So I don't see what this Barbour guy is going on about; it seems like standard physics. Whether you write your function f(x,t) or f(y) where y = g(x,t), or even just f(x) where t = h(x), is totally irrelevant to the universe. It's just another coordinate transformation, like translating the whole universe ten meters to the left.

Now, if you have a new invariant to propose, THAT would amount to an actual change in the laws of physics.

Comment by ME3 on My Childhood Role Model · 2008-05-23T15:48:10.000Z · LW · GW

By the way, when the best introduction to a supposedly academic field is works of science fiction, it sets off alarm bells in my head. I know that some of the best ideas come from sci-fi and yada, yada, but just throwing that out there. I mean, when your response to an AI researcher's disagreement is "Like, duh! Go read some sci-fi and then we'll talk!" who is really in the wrong here?

Comment by ME3 on My Childhood Role Model · 2008-05-23T14:56:20.000Z · LW · GW

Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without a paper and pencil.

Some people can do it without much effort at all, and not all of them are autistic, so you can't just say that they've repurposed part of their brain for arithmetic. Furthermore, other people learn to multiply with less effort through tricks. So, I don't think it's really a flaw in our brains, per se.

Comment by ME3 on That Alien Message · 2008-05-22T15:22:40.000Z · LW · GW

Apropos of this, the Eliezer-persuading-his-Jailer-to-let-him-out thing was on reddit yesterday. I read through it and today there's this. Coincidence?

Anyway, I was thinking about the AI Jailer last night, and my thoughts apply to this equally. I am sure Eliezer has thought of this so maybe he has a clear explanation that he can give me: what makes you think there is such a thing as "intelligence" at all? How do we know that what we have is one thing, and not just a bunch of tricks that help us get around in the world?

It seems to me a kind of anthropocentric fallacy, akin to the ancient peoples thinking that the gods were literally giant humans up in the sky. Now we don't believe that anymore but we still think any superior being must essentially be a giant human, mind-wise.

To give an analogy: imagine a world with no wheels (and maybe no atmosphere so no flight either). The only way to move is through leg-based locomotion. We rank humans in running ability, and some other species fit into this ranking also, but would it make sense to then talk about making an "Artificial Runner" that can out-run all of us, and run to the store to buy us milk? And if the AR is really that fast, how will we control it, given that it can outrun the fastest human runners? Will the AR cause the human species to go extinct by outrunning all the males to mate with the females and replace us with its own offspring?

Comment by ME3 on Faster Than Science · 2008-05-20T15:15:04.000Z · LW · GW

I think that I have only now really understood what Eliezer has been getting at with the past ten or so posts: this idea that you could be a scientist even if you generated hypotheses with a robot-controlled Ouija board. I think other readers have already said this numerous times, but it strikes me as terribly wrong.

First of all, good luck getting research funding for such hypotheses (and it wouldn't be fair to leave out funding from the description of Science if you're including institutional inertia and bias).

And I think we all know that in general, someone who used this method would never be able to get anywhere in academia, simply because they wouldn't be respected.

That, I think, teaches an important lesson. Individual scientists are not required to come up with correct or even plausible hypotheses because we all know that individual rationality is flawed. But the aggregate community of scientists and the people who fund them work together to evaluate the plausibility of a given hypothesis, and thereby effectively carry out the Bayesian analysis that Eliezer speaks of.

So one of many thousands of scientists can propose an utterly harebrained theory, and even spend his life on it if he wants, and it will barely register as a blip on the collective scientific radar. But when SR and GR were proposed, it was pretty much taken as a given that they were true, because they HAD to be true. I read somewhere that the experiment done by Eddington to verify the bending of light around the sun was far from accurate enough to actually be a verification of relativity. But it was still taken as a verification, because everyone was pretty much convinced anyway. And conversely, no matter how many experiments the cold fusion people do that show some unexpected effects, nobody takes them very seriously.

Now, you might say that this system is horribly inefficient, and many people say this on a regular basis. But here, the problem is simply that no individual human being can process that much information, and so the time it takes for a given data point to propagate through the community is very long. Of course, the internet helps, and if scientific journals were free, that would probably help also. But ultimately, I think this inefficiency is precisely the cost of a network evaluating all of the priors to find out the plausibility of a theory.

Of course, it also reduces a scientist to nothing more than a cog in a machine, and many people who want to be heroic can't deal with that. But in real life, no scientist is expected to evaluate his own hypothesis. They are expected to come up with a hypothesis, and try to verify it if they can get funding, and let the community decide to what extent the results are valid.

Comment by ME3 on Do Scientists Already Know This Stuff? · 2008-05-17T03:55:46.000Z · LW · GW

First, I think this can be said for any field: the textbooks don't tell you what you really need to know, because what you really need to know is a state of mind that you can only arrive at on your own.

And there are many scientists who do in fact spend time puzzling over how to distinguish good hypotheses from bad. Some don't, and they spend their days predicting what the future will be like in 2050. But they need not concern us, because they are just examples of people who are bad at what they do.

There is this famous essay: http://www.quackwatch.com/01QuackeryRelatedTopics/signs.html

And also this one: http://wwwcdf.pd.infn.it/~loreti/science.html

Comment by ME3 on Science Isn't Strict Enough · 2008-05-16T20:33:28.000Z · LW · GW

P(A&B) <= P(A), P(A|B) >= P(A&B)

Isn't this just ordinary logic? It doesn't really require all of probability theory. I believe that logic is a fairly uncontroversial element of scientific thought, though of course occasionally misapplied.
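The "ordinary logic" point can be checked mechanically on any explicit joint distribution (the numbers below are made up for illustration): P(A&B) <= P(A) because the conjunction is a subset of A, and P(A|B) >= P(A&B) because conditioning divides by P(B) <= 1.

```python
# Explicit joint distribution over (A, B), as P[(a, b)]:
P = {(True, True): 0.2, (True, False): 0.3,
     (False, True): 0.1, (False, False): 0.4}

p_a  = sum(p for (a, b), p in P.items() if a)   # P(A) = 0.5
p_b  = sum(p for (a, b), p in P.items() if b)   # P(B) = 0.3
p_ab = P[(True, True)]                           # P(A & B) = 0.2
p_a_given_b = p_ab / p_b                         # P(A | B) = 0.2 / 0.3

assert p_ab <= p_a           # conjunction can't be more probable than A
assert p_a_given_b >= p_ab   # conditioning divides by P(B) <= 1
print(p_a, p_ab, round(p_a_given_b, 3))
```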

Comment by ME3 on Science Isn't Strict Enough · 2008-05-16T14:00:17.000Z · LW · GW

Similarly, if the Bayesian answer is difficult to compute, that doesn't mean that Bayes is inapplicable; it means you don't know what the Bayesian answer is.

So then what good is this Bayes stuff to us exactly, us of the world where the vast majority of things can't be computed?

Comment by ME3 on When Science Can't Help · 2008-05-15T16:48:16.000Z · LW · GW

Nick: Not any more ridiculous than throwing out an old computer or an old car or whatever else. If we dispense with the concept of a soul, then there is really no such thing as death, but just states of activity and inactivity for a particular brain. So if you accept that you are going to be inactive for probably decades, then what makes you think you're going to be worth reactivating?

Comment by ME3 on When Science Can't Help · 2008-05-15T15:19:00.000Z · LW · GW

If you accept that there is no "soul" and your entire consciousness exists only in the physical arrangement of your brain (I more or less believe this), then it would be the height of egotism to require someone to actively preserve your particular brain pattern for an unknown number of years until your body can be reactivated. Simply because better ones are sure to come along in the meantime.

I mean, think about your 70-year-old uncle with his outdated ways of thinking and generally eccentric behavior -- now think of a freezer full of 700-year-old uncles who want to be unfrozen as soon as the technology exists, just so they can continue making obnoxious forum posts about how they're smarter than all scientists on earth. Would you want to unfreeze them, except maybe as historical curiosities?

Comment by ME3 on The Dilemma: Science or Bayes? · 2008-05-13T15:05:02.000Z · LW · GW

I also think you are taking the MWI vs. Copenhagen too literally. The reason why they are called interpretations is that they don't literally say anything about the actual underlying wave function. Perhaps, as Goofus in your earlier posts, some physicists have gotten confused and started to think of the interpretations as reality. But the idea that the wave function "collapses" only makes sense as a metaphor to help us understand its behavior. That is all that a theory that makes no predictions can be -- a metaphor.

MWI and Copenhagen are different perspectives on the same process. Copenhagen looks at the past behavior of the wave function from the present, and in such cases the wave function behaves AS IF it had previously collapsed. MWI looks at the future behavior of the wave function, where it behaves AS IF it is going to branch. If you look at it that way, the simplest explanation depends on what you are describing: if you are trying to talk about YOUR past history in the wave function, you have no choice but to add in information about each individual branch that was taken from t_0 to t, but if you are talking about the future in general, it is simplest to just include ALL the possible branches.

Comment by ME3 on The Failures of Eld Science · 2008-05-12T16:20:04.000Z · LW · GW

Seriously, agreeing with Caledonian.

I remember Eliezer wrote an earlier essay to the effect that GR is a really simple theory, in some information-theoretic sense, and that therefore we should optimize our theories based on their information-theoretic complexity. But what's being missed here is that GR (and SR and Newtonian physics and arithmetic . . .) is simple stated on its own terms. That's WHY it's a paradigm shift. If you tried to state GR strictly as a modification of Newtonian mechanics in a global coordinate system, you would either fail, or you would end up with something incredibly complex that would appear implausible by information-theoretic counts.

The bits that you fail to count, when looking at a simple theory, are the bits required to represent the entire worldview, which don't seem like they're information because they're just how you look at the world.

What you're trying to do is find a local optimization in theory-space, but all you're working with is a projection of theory-space onto the sub-space that is our current way of thinking, and then you find your objective function is not quite zero, but you wave your hands and say, "Hey! It's lower than what we had before! Why did it take people 30 years to reach this not-quite-minimum when all they had to do was descend the gradient?" I think a lot of people would rather just wait around for someone to come along with an answer that really does minimize the objective function.

Somehow you have to hit upon the right projection of theory-space that happens to include all the right variables. If you have a mistress, I invite you to retire to a cottage with her for a month and see if that helps.

Comment by ME3 on Quantum Non-Realism · 2008-05-08T15:45:57.000Z · LW · GW

1) Can someone tell me to what extent this many-worlds interpretation is really accepted? I mean, nobody told me the news that the collapse interpretation was no longer accepted, and I think I read such things in a recent physics textbook. So, can physicists remark on their experience?

2) I think the notion that the QM equations don't mean anything refers to the fact that nobody knows what the real substrate is in which QM takes place. It's a bit analogous to the pre-QM situation with light. People asked, what does light travel in? But since nobody was able to identify any substrate for light, they had to treat the wave-like nature of light as simply an empty metaphor. At least, that's how the classical theory of light was taught to me.

So in the same way, you say that the amplitudes and configurations are the "reality." But where do the configurations "exist"? Unless you believe that the universe is being simulated in a computer (which seems like a highly unparsimonious not to mention anthropocentric assumption), the equations must be a model of something that's out there. But it doesn't seem like we really know anything that the equations are models of.

Comment by ME3 on The Born Probabilities · 2008-05-01T15:13:35.000Z · LW · GW

As I understand it (someone correct me if I'm wrong), there are two problems with the Born rule: 1) It is non-linear, which suggests that it's not fundamental, since other fundamental laws seem to be linear

2) From my reading of Robin's article, I gather that the problem with the many-worlds interpretation is: let's say a world is created for each possible outcome (countable or uncountable). In that case, the vast majority of worlds should end up away from the peaks of the distribution, just because the peaks only occupy a small part of any distribution.

Robin's solution seems to me equivalent to the Quantum Spaghetti Monster eating the unlikely worlds that we find ourselves not to end up in. The key line is "sudden and thermodynamically irreversible." Actually, that should be enough to bury the theory, since aren't fundamental physical laws time-symmetric (thermodynamically neutral)?

We could probably eliminate this distraction of consciousness, couldn't we? I mean, let's say that Mathematica version 5000 comes out in a few centuries and in addition to its other symbolic algebra capabilities, it comes with a physical-law-prover: you ask it questions and it sets up experiments to answer those questions. So you ask it about quantum mechanics, it does a bunch of double-slit-experiments in a robotic lab, and gives you the answer, which includes the Born rule. Consciousness was never involved.

Actually it seems to me like this whole business of quantum probabilities is way overrated (for the non-physicist), because it only really manifests itself in cleverly constructed experiments . . . right? I mean, setting aside exactly how Born's rule derives from the underlying physics, is there any reason to believe that we would learn anything new by finding out?

Comment by ME3 on The Conscious Sorites Paradox · 2008-04-29T15:29:56.000Z · LW · GW

mitchell: As the Buddhists pointed out a long time ago, the flow of time is actually an illusion. All that you actually experience at any given moment is your present sensory input, plus the memories of the past. But there are any number of experiences involving loss of consciousness that will show that the flow of time as we perceive it is completely subjective (not to say that there is no time "out there," just that we don't directly perceive it).

So while I agree that "something is happening," it does not necessarily consist of one thing after another. Really it's just another formulation of cogito ergo sum.

This is also relevant in response to Caledonian - the brain does not have to live for any sustained period of time. A Boltzmann brain can pop into existence fully oxygenated with the memories that it is me, typing this response, think about it for a few seconds, and then die of whatever brains die of in interstellar space. From inside the brain, there would be no way to know the difference.

Eliezer: Isn't it sufficient to say that your brain has an expectation of order because that is how it's evolved? And what would a brain with no expectation of order even look like? Is it meaningful to talk about a control system that has no model of the outside world?

Comment by ME3 on Evaporative Cooling of Group Beliefs · 2007-12-10T23:33:15.000Z · LW · GW

Eliezer, you are right, what I really meant to say was, once a person finds a locally optimal solution using whatever algorithm, they then have a threshold for changing their mind, and it is that threshold that is similar to temperature.

Comment by ME3 on Evaporative Cooling of Group Beliefs · 2007-12-10T16:25:03.000Z · LW · GW

The metaphor can be made mathematically precise if we first make the analogy between human decision-making and optimization methods like simulated annealing and genetic algorithms. These optimization methods look for a locally optimal solution, but add some sort of "noise" term to try to find a globally optimal solution. So if we suppose that someone who wants to stay in his own local minimum has a lower "noise" temperature than someone who is open-minded, then the metaphor starts to make sense on a much more profound level.
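A minimal simulated-annealing sketch makes the metaphor concrete (all names and the toy objective are illustrative): the temperature parameter is exactly the "open-mindedness" knob, since it sets the probability of accepting a move to a worse position.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0, seed=0):
    """Minimize f over the integers by annealing: at high temperature,
    worse moves are often accepted; cooling locks in a local choice."""
    rng = random.Random(seed)
    x, best = x0, x0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9       # linear cooling schedule
        cand = x + rng.choice((-1, 1))        # propose a neighboring "belief"
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / t) -- the open-mindedness knob.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
    return best

# Bumpy objective with a shallow basin near 0 and a deeper one near x = 9.
f = lambda x: (x - 10) ** 2 / 20 + 2 * math.cos(x)
print(simulated_annealing(f, x0=0))  # escapes the shallow basin near 0
```

A solver started at 0 with temperature pinned near zero would settle into the nearest dip and stay there; the early high-temperature phase is what lets it cross the barrier, which is the closed-minded versus open-minded contrast in the comment above.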

Comment by ME3 on The Halo Effect · 2007-11-30T15:55:25.000Z · LW · GW

I am also struck by the correlation-vs.-causation issue in the Canadian voters study. Moreover, how do we know that the attractiveness rating isn't actually a reflection of the qualities the voters claim to be looking for? I.e., a more confident, intelligent, eloquent candidate would probably appear more attractive than one who isn't, all other things being equal.