Posts

Comments

Comment by earthwormchuck163 on Gauging interest for a Tokyo area meetup group · 2014-08-05T10:32:43.417Z · LW · GW

I am on vacation in Japan until the end of August, and I might be interested in attending a meetup. Judging from the lack of comments here, this never took off, but I might as well leave this here just in case.

Comment by earthwormchuck163 on Open Thread March 31 - April 7 2014 · 2014-04-03T01:49:04.737Z · LW · GW

I probably have obstructive sleep apnea. I exhibit the symptoms (i.e. feeling sleepy despite getting normal or above-average amounts of sleep, dry mouth when I wake up), and I just had a sleep specialist tell me that the geometry of my mouth and sinuses puts me at high risk. I have an appointment for a sleep study a month from now. Based on what I've read, this means it will probably take at least two months before I can start using a CPAP machine if I go through the standard procedure. This seems like an insane amount of time to wait for something that has a good chance of significantly improving my quality of life immediately. Is there any good reason why I can't just buy a CPAP machine and start using it?

Comment by earthwormchuck163 on Business Insider: "They Finally Tested The 'Prisoner's Dilemma' On Actual Prisoners — And The Results Were Not What You Would Expect" · 2013-07-24T13:57:36.890Z · LW · GW

So if you want the other party to cooperate, should you attempt to give that party the impression it has been relatively unsuccessful, at least if that party is human?

I don't think so. It seems more likely to me that the common factor between increased defection rate and self-perceived success is more consequentialist thinking. This leads to perceived success via actual success, and to defection via thinking "defection is the dominant strategy, so I'll do that".

Comment by earthwormchuck163 on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-08T07:17:20.013Z · LW · GW

After thinking about it a bit more, I decided that I actually do care about simulated people almost exactly as much as the mugger thought I did.

Comment by earthwormchuck163 on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-06T08:17:45.995Z · LW · GW

Mugger: Give me five dollars, and I'll save 3↑↑↑3 lives using my Matrix Powers.

Me: I'm not sure about that.

Mugger: So then, you think the probability I'm telling the truth is on the order of 1/3↑↑↑3?

Me: Actually, no. I'm just not sure I care about your 3↑↑↑3 simulated people as much as you think I do.

Mugger: "This should be good."

Me: There are only something like n=10^10 neurons in a human brain, and the number of possible states of a human brain is exponential in n. This is stupidly tiny compared to 3↑↑↑3, so most of the lives you're saving will be heavily duplicated (rough arithmetic below). I'm not really sure that I care about duplicates that much.

Mugger: Well I didn't say they would all be humans. Haven't you read enough Sci-Fi to know that you should care about all possible sentient life?

Me: Of course. But the same sort of reasoning implies that either there are a lot of duplicates, or else most of the people you are talking about are incomprehensibly large, since there aren't that many small Turing machines to go around. And it's not at all obvious to me that you can describe arbitrarily large minds whose existence I should care about without using up a lot of complexity. More generally, I can't see any way to describe worlds which I care about to a degree that vastly outgrows their complexity. My values are complicated.
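For a rough sense of the magnitudes in that exchange, here is a sketch of the arithmetic (my own gloss, treating each of the ~10^10 neurons as binary just to get an upper-bound flavour):

```latex
\#\{\text{distinct brain states}\} \;\lesssim\; 2^{10^{10}}
  \;=\; 10^{\,10^{10}\log_{10} 2} \;\approx\; 10^{\,3\times 10^{9}}
\qquad\text{vs.}\qquad
3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3)
  \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987
```

i.e. a power tower of 3s nearly eight trillion levels tall, so by pigeonhole almost all of the 3↑↑↑3 simulated humans would have to be exact duplicates.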

Comment by earthwormchuck163 on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T08:35:39.923Z · LW · GW

How does your proposed solution for Game 1 stack up against the brute-force metastrategy?

Well, the brute force strategy is going to do a lot better, because it's pretty easy to come up with a number bigger than the length of the longest program anyone has ever thought to write, and plugging that into the brute force strategy automatically beats any specific program anyone has ever thought to write. On the other hand, the meta-strategy isn't actually computable (you need to be able to decide whether a program produces large outputs, which requires a halting oracle, or at least a way of coming up with large stopping times to test against). So it doesn't really make sense to compare them.

Comment by earthwormchuck163 on A Series of Increasingly Perverse and Destructive Games · 2013-02-15T08:25:51.370Z · LW · GW

I think I can win Game 1 against almost anyone - in other words, I think I have a larger computable number than any sort of computable number I've seen anyone describe in these sorts of contests, where the top entries typically use the fast-growing hierarchy for large recursive ordinals, in contests where Busy Beaver and beyond aren't allowed.

Okay, I have to ask. Care to provide a brief description? You can assume familiarity with all the standard tricks if that helps.

Comment by earthwormchuck163 on A Series of Increasingly Perverse and Destructive Games · 2013-02-14T18:43:41.149Z · LW · GW

In short, take it as a given that anyone, on any level, has a halting oracle for arbitrary programs, subprograms, and metaprograms, and that non-returning programs are treated as producing no output.

In this case, I have no desire to escape from the room.

Comment by earthwormchuck163 on Welcome to Less Wrong! (July 2012) · 2013-02-09T04:22:48.412Z · LW · GW

Why not stay around and try to help fix the problem?

Comment by earthwormchuck163 on Simulating Problems · 2013-01-31T04:15:29.818Z · LW · GW

I did read that. It either doesn't say anything at all, or else it trivializes the problem when you unpack it.

Also, this is not worth my time. I'm out.

Comment by earthwormchuck163 on Simulating Problems · 2013-01-31T03:37:07.634Z · LW · GW

Your question is not stated in anything like the standard terminology of game theory and decision theory. It's also not clear what you are asking on an informal level. What do you mean by "analogous"?

Comment by earthwormchuck163 on Simulating Problems · 2013-01-31T02:42:18.139Z · LW · GW

I'll give you a second data point to consider. I am a soon-to-be-graduated pure math undergraduate. I have no idea what you are asking, beyond very vague guesses. Nothing in your post or the preceding discussion is of a "rather mathematical nature", let alone a precise specification of a mathematical problem.

If you think that you are communicating clearly, then you are wrong. Try again.

Comment by earthwormchuck163 on My simple hack for increased alertness and improved cognitive functioning: very bright light · 2013-01-18T16:39:53.479Z · LW · GW

Oh wow, this is so obvious in hindsight. Trying this ASAP, thank you.

Comment by earthwormchuck163 on Rationality Quotes January 2013 · 2013-01-11T22:03:38.611Z · LW · GW

That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context).

I don't understand why this is a rationality quote at all; am I missing something, or is it just because of the superficial similarity to some of EY's quotes about apathetic uFAIs?

Comment by earthwormchuck163 on Morality is Awesome · 2013-01-11T20:41:05.352Z · LW · GW

One time my roommate ate shrooms, and then he spent about 2 hours repeatedly knocking over an orange juice jug, and then picking it up again. It was bizarre. He said "this is the best thing ever" and was pretty sincere. It looked pretty silly from the outside though.

Comment by earthwormchuck163 on Standard and Nonstandard Numbers · 2013-01-11T02:22:41.713Z · LW · GW

This is largely a matter of keeping track of the distinction between "first order logic: the mathematical construct" and "first order logic: the form of reasoning I sometimes use when thinking about math". The former is an idealized model of the latter, but they are distinct and belong in distinct mental buckets.

It may help to write a proof checker for first order logic. Or alternatively, if you are able to read higher math, study some mathematical logic/model theory.

Comment by earthwormchuck163 on Course recommendations for Friendliness researchers · 2013-01-10T08:18:19.843Z · LW · GW

I have personally witnessed great minds acting very stupid because of it.

I'm curious. Can you give a specific example?

Comment by earthwormchuck163 on Course recommendations for Friendliness researchers · 2013-01-10T08:15:48.989Z · LW · GW

Note that this actually has very little to do with most of the seemingly hard parts of FAI theory. Much of it would be just as important if we wanted to create a recursively self modifying paper-clip maximizer, and be sure that it wouldn't accidentally end up with the goal of "do the right thing".

The actual implementation is probably far enough away that these issues aren't even on the radar screen yet.

Comment by earthwormchuck163 on Standard and Nonstandard Numbers · 2013-01-10T07:13:32.144Z · LW · GW

Sorry I didn't answer this before; I didn't see it. To the extent that the analogy applies, you should think of non-standard numbers and standard numbers as having the same type. Specifically, the type of things that are being quantified over in whatever first order logic you are using. And you're right that you can't prove that statement in first order logic; worse, you can't even say it in first order logic (see the next post, on Gödel's theorems and Compactness/Löwenheim–Skolem, for why).

Comment by earthwormchuck163 on Course recommendations for Friendliness researchers · 2013-01-10T00:19:51.043Z · LW · GW

I am well versed in most of this math, and a fair portion of the CS (mostly the more theoretical parts, not so much the applied bits). Should I contact you now, or should I study the rest of that stuff first?

In any case, this post has caused me to update significantly in the direction of "I should go into FAI research". Thanks.

Comment by earthwormchuck163 on Rationality Quotes January 2013 · 2013-01-08T22:52:12.231Z · LW · GW

Also, Kyubey clearly has pretty drastically different values from people, and thus his notion of saving the universe is probably not quite right for us.

Comment by earthwormchuck163 on Second-Order Logic: The Controversy · 2013-01-05T21:50:42.191Z · LW · GW

If you take this objection seriously, then you should also take issue with predictions like "nobody will ever transmit information faster than the speed of light", or things like it. After all, you can never actually observe the laws of physics to have been stable and universal for all time.

If nothing else, you can consider each as being a compact specification of an infinite sequence of testable predictions: "doesn't halt after one step", "doesn't halt after two steps",... "doesn't halt after n steps".
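To make "testable" concrete, here is a minimal sketch (my own illustration, modelling a program as a Python generator that yields once per step):

```python
def halts_within(program, n_steps):
    """Run `program` (a zero-argument generator function, one yield per step)
    for at most n_steps and report whether it finished within that budget."""
    steps = program()
    for _ in range(n_steps):
        try:
            next(steps)
        except StopIteration:
            return True   # it halted within the budget
    return False          # confirms only "hasn't halted after n_steps"


# "This program never halts" packages the infinite family of predictions
# halts_within(p, 1) == False, halts_within(p, 2) == False, ...
def loop_forever():
    while True:
        yield


assert not halts_within(loop_forever, 1000)
```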

Comment by earthwormchuck163 on Second-Order Logic: The Controversy · 2013-01-05T09:23:55.137Z · LW · GW

I don't think ZFC can prove the consistency of ZF either, but I'm not a set theorist.

Also not a set theorist, but I'm pretty sure this is correct. ZF+Con(ZF) proves Con(ZFC) (see http://en.wikipedia.org/wiki/Constructible_universe), so if ZFC could prove Con(ZF) then it would also prove Con(ZFC), i.e. its own consistency, which Gödel's second incompleteness theorem rules out for a consistent ZFC.
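Laid out as a chain (a sketch of the standard argument):

```latex
\mathrm{ZF} + \mathrm{Con}(\mathrm{ZF}) \vdash \mathrm{Con}(\mathrm{ZFC})
\;\Longrightarrow\;
\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{ZF}) \rightarrow \mathrm{Con}(\mathrm{ZFC})
\;\Longrightarrow\;
\Bigl(\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{ZF})
  \;\Rightarrow\;
  \mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{ZFC})\Bigr)
```

and the last conclusion is impossible for a consistent ZFC by Gödel's second incompleteness theorem.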

Comment by earthwormchuck163 on Standard and Nonstandard Numbers · 2012-12-20T23:51:34.440Z · LW · GW

< is defined in terms of plus by saying x<y iff there exists a nonzero z such that y=z+x. + is supposed to be provided as a primitive operation as part of the data making up a model of PA. It's not actually possible to give a concrete description of what + looks like in general for non-standard models, because of Tenenbaum's Theorem, but at least when one of x or y (say x) is a standard number it's exactly what you'd expect: x+y is what you get by starting at y and going x steps to the right.

To see that x<y whenever x is a standard number and y isn't, you need to be a little tricky. You actually prove an infinite family of statements. The first one is "for all x, either x=0 or else x>0". The second is "for all x, either x=0 or x=1 or x>1", and in general it's "for all x, either x=0,1,..., or n, or else x>n". Each of these can be proven by induction, and the entire infinite family together implies that a non-standard number is bigger than every standard number.
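Concretely, the n-th statement in the family is (a sketch; n ranges over the standard naturals):

```latex
\forall x\;\bigl(x = 0 \;\lor\; x = 1 \;\lor\; \cdots \;\lor\; x = n \;\lor\; x > n\bigr)
```

Each instance is a single first order sentence provable by induction on x, and a non-standard y falsifies every disjunct x = 0, ..., x = n, so each instance forces y > n.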

Comment by earthwormchuck163 on Standard and Nonstandard Numbers · 2012-12-20T23:39:43.424Z · LW · GW

It's probably worth explicitly mentioning that the structure that you described isn't actually a model of PA. I'd imagine that could otherwise be confusing for readers who have never seen this stuff before and are clever enough to notice the issue.

Comment by earthwormchuck163 on Standard and Nonstandard Numbers · 2012-12-20T22:30:50.329Z · LW · GW

Okay. This is exactly what I thought it should be, but the way Eliezer phrased things made me wonder if I was missing something. Thanks for clarifying.

Comment by earthwormchuck163 on Standard and Nonstandard Numbers · 2012-12-20T19:05:46.506Z · LW · GW

you can write a formula in the language of arithmetic saying "the Turing machine m halts on input i"

You get a formula which is true of the standard numbers m and i if and only if the m'th Turing machine halts on input i. Is there really any meaningful sense in which this formula is still talking about Turing machines when you substitute elements of some non-standard model?
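For reference, one standard way to write such a formula uses Kleene's T predicate, which is primitive recursive and hence definable in the language of arithmetic (a sketch):

```latex
\mathrm{Halt}(m, i) \;:\equiv\; \exists t\; T(m, i, t)
```

where T(m, i, t) says that t encodes a terminating computation history of machine m on input i. In a non-standard model the witness t can itself be non-standard, i.e. a "computation" of non-standard length, which is just the worry raised above.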

Comment by earthwormchuck163 on Standard and Nonstandard Numbers · 2012-12-20T18:55:53.456Z · LW · GW

Nice post, but I think you got something wrong. Your structure with a single two-sided infinite chain isn't actually a model of first order PA. If x is an element of the two-sided chain, then y=2x=x+x is another non-standard number, and y necessarily lies in a different chain since y-x=x is a non-standard number. Of course, you need to be a little bit careful to be sure that this argument can be expressed in first order language, but I'm pretty sure it can. So, as soon as there is one chain of non-standard numbers, that forces the existence of infinitely many.
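One way to see that the chain-hopping step really is first order (my own sketch, using the cancellation law of +, which is a theorem of PA):

```latex
y = x + x \;\wedge\; y = x + k
\;\Longrightarrow\; x + x = x + k
\;\Longrightarrow\; x = k
```

So if y sat a standard number k of successor steps above x, then x would be standard; the symmetric case x = y + k forces x = 0. Hence x + x lies in a different chain, and iterating gives infinitely many chains.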

Comment by earthwormchuck163 on Harry Potter and the Methods of Rationality discussion thread, part 17, chapter 86 · 2012-12-18T02:28:03.543Z · LW · GW

I bet a large portion of the readership would have been disappointed if that didn't happen.

And in this particular case, that was the only fast way for Alastor to gain enough respect for Harry's competence that they could cooperate in the future. It wouldn't have been consistent with his already established paranoia if he just believed Dumbledore & co.

I can imagine this getting old eventually, but imo it hasn't happened yet.

Comment by earthwormchuck163 on Causal Diagrams and Causal Models · 2012-10-12T22:15:21.378Z · LW · GW

Were you trying to diet at the same time? Have you ever tried exercising more without also restricting your food intake?

Also, have you ever enjoyed exercising while doing it?

Edit: Just to be clear, this isn't supposed to be advice, implicit or otherwise. I'm just curious.

Comment by earthwormchuck163 on The Fabric of Real Things · 2012-10-12T22:12:03.880Z · LW · GW

You can account for a theory where neurons cause consciousness, and where consciousness has no further effects, by drawing a causal graph like

(universe)->(consciousness)

where each bracketed bit is shorthand for a possibly large number of nodes, and likewise the -> is short for a possibly large number of arrows. You can then certainly trace forward along causal links from "you" to "consciousness", so it's meaningful, and meaningful for the same reason that "the ship doesn't disappear when it crosses the horizon" is.

We reject epiphenomenal theories of consciousness because the causal model without the (consciousness) subgraph is simpler than the one with it, and we have no evidence for the latter to overcome this prior improbability. This is of course the exact same reason why we accept that the ship still exists after it crosses the horizon.

Comment by earthwormchuck163 on The Fabric of Real Things · 2012-10-12T21:42:28.876Z · LW · GW

In a universe without causal structure, I would expect an intelligent agent that uses an internal causal model of the universe to never work.

Of course you can't really have an intelligent agent with an internal causal model in a universe with no causal structure, so this might seem like a vacuous claim. But it still has the consequence that

P(intelligence is possible | causal universe) > P(intelligence is possible | acausal universe).

Comment by earthwormchuck163 on The Fabric of Real Things · 2012-10-12T20:59:20.390Z · LW · GW

My cousin is psychic - if you draw a card from his deck of cards, he can tell you the name of your card before he looks at it. There's no mechanism for it - it's not a causal thing that scientists could study - he just does it.

I believe that your cousin can, under the right circumstances, reliably guess which card you picked. There are all sorts of card tricks that let one do exactly that if the setup is right. But I confidently predict that his guess is not causally separated from the choice of card.

To turn this into a concrete empirical prediction: Suppose that I have your cousin repeat his trick several times (say 100) for me, and each time I eliminate any causal interaction between the deck and your cousin that I can think of. Then I predict with probability ~1 that, within several rounds, your cousin will not be able to guess which card I draw.

If this prediction turns out to be wrong, and your cousin can still reliably guess which card I drew even when we are in lead boxes on opposite sides of the planet, then I will gladly discard my model of reality, since it will have predicted a true state of affairs to be impossible.

Even this wouldn't be enough to prove your cousin's power is acausal, since there are conceivable causal influences that I couldn't eliminate easily. For example, maybe we live in a computer simulation that runs standard physics all the time, except for a high-level override which configures your cousin's visual cortex to see a picture of every card that anyone draws from a deck. But I would certainly require evidence at least that strong before even considering the possibility that your cousin can guess cards acausally.

it's not a causal thing that scientists could study

A scientist is just a person who learns about things by looking at them (i.e. causally interacting with them), plus some social conventions to make up for human fallibility. Do you claim that the social conventions of science stop one from learning about this thing by looking at it, or that one cannot learn about this thing by looking at it at all? If the former, then again I confidently predict that you are wrong. If the latter, then I can only ask how you learned about the thing, if not by looking at it.

Same thing when I commune on a deep level with the entire universe in order to realize that my partner truly loves me.

You mean to say that your confidence that your partner loves you is not a result of your direct interactions with em? How terrible!

Comment by earthwormchuck163 on The Useful Idea of Truth · 2012-10-02T20:50:18.565Z · LW · GW

The pictures are a nice touch.

Though I found it sort of unnerving to read a paragraph and then scroll down to see a cartoon version of the exact same image I had painted inside my head, several times in a row.

Comment by earthwormchuck163 on The Useful Idea of Truth · 2012-10-02T20:22:34.379Z · LW · GW

I would like to thank you for bringing my attention to that sentence without any context.

Comment by earthwormchuck163 on The Nanny State Didn't Show Up, You Hired It [LINK] · 2012-09-18T22:38:25.874Z · LW · GW

Thanks for reminding me that I should be following this guy.

Comment by earthwormchuck163 on Rationality Quotes August 2012 · 2012-08-23T01:05:56.278Z · LW · GW

No, but mistaking your approximation for the thing you are approximating is.

Comment by earthwormchuck163 on Rationality Quotes August 2012 · 2012-08-22T21:23:20.385Z · LW · GW

I thought that was what I just said...

Comment by earthwormchuck163 on Rationality Quotes August 2012 · 2012-08-22T03:35:16.715Z · LW · GW

Instrumental values are just subgoals that appear when you form plans to achieve your terminal values. They aren't supposed to be reflected in your utility function. That is a type error, plain and simple.

Comment by earthwormchuck163 on From the "weird math questions" department... · 2012-08-10T03:16:28.296Z · LW · GW

Okay, I concede. I recognize when I've been diagonalized.

Comment by earthwormchuck163 on From the "weird math questions" department... · 2012-08-10T02:18:49.945Z · LW · GW

Did you mean to write "for all programs that halt in less than (some constant multiple of) N steps", because what you wrote doesn't make sense.

Yes. Edited.

What if I give you a program that enumerates all proofs under PA and halts if it ever finds proof for a contradiction? There is no proof under PA that this program doesn't halt, so your fake oracle will return HALT, and then I will have reasonable belief that your oracle is fake.

That's cool. Can you do something similar if I change my program to output NOT-HALT when it doesn't find a proof?

Comment by earthwormchuck163 on From the "weird math questions" department... · 2012-08-09T23:14:13.189Z · LW · GW

If I am allowed to use only exponentially more computing power than you (a far cry from a halting oracle), then I can produce outputs that you cannot distinguish from a halting oracle.

Consider the following program: Take some program P as input, and search over all proofs of length at most N in some formal system that can describe the behaviour of arbitrary programs (i.e. first order PA) for a proof that P either halts or does not halt. If you find a proof one way or the other, return that answer. Otherwise, return HALT.

This will return the correct answer for all programs which halt in less than (some constant multiple of) N steps, since actually running the program until it halts provides a proof of halting. But it also gives the correct answer for a lot of other cases: for example, there is a very short proof that "while true print 1" never halts.

Now, if I am allowed exponentially more computing power than you, then I can run this program with N equal to the number of computations that you are allowed to expend. In particular, for any program that you query me on, I will either answer correctly, or give a false answer that you won't be able to call me out on.
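Here is a minimal structural sketch in Python of the procedure just described; find_proof_of_halting_status is a hypothetical placeholder for the bounded proof search (not actually implemented), so this shows only the shape of the strategy:

```python
def find_proof_of_halting_status(program_source: str, max_proof_length: int):
    """Hypothetical placeholder: search every proof of length <= max_proof_length
    in a formal system such as first order PA for a proof that `program_source`
    halts or does not halt.  Returns "HALT", "NOT-HALT", or None if no proof
    of either statement is found within the length bound."""
    return None  # the actual proof enumeration is omitted


def fake_halting_oracle(program_source: str, n: int) -> str:
    """Answer correctly whenever a short proof settles the question,
    and bluff "HALT" otherwise, as described above."""
    verdict = find_proof_of_halting_status(program_source, max_proof_length=n)
    if verdict is not None:
        return verdict   # provably correct answer
    return "HALT"        # default answer a resource-bounded asker can't refute
```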

The Kolmogorov complexity of an uncomputable sequence is infinite, so Solomonoff induction assigns it a probability of zero, but there's always a computable number with less than epsilon error, so would this ever actually matter?

Can you re-phrase this please? I don't understand what you are asking.

Comment by earthwormchuck163 on What useful skills can be learned in three months? · 2012-05-25T08:03:31.680Z · LW · GW

Two suggestions, sort of on opposite ends of the spectrum.

First: Practice doing "contest style" math problems. This helps your general math skills, and also helps get you used to thinking creatively and learning to gain some confidence in exploring your good ideas to their limit, while also encouraging you to quickly relinquish lousy approaches.

Second: Exercise. A lot. Whether or not you're already in good shape, you will almost inevitably find it hard to keep a healthy exercise routine when starting in college. So start building some good habits right away.

Comment by earthwormchuck163 on Imposing FAI · 2012-05-17T22:37:03.875Z · LW · GW

Ahh, that makes a lot more sense.

Comment by earthwormchuck163 on Imposing FAI · 2012-05-17T21:52:32.952Z · LW · GW

The standard answer is that there is such a strong "first mover advantage" for self-improving AIs that it only matters which comes first: if an FAI comes first, it would be enough to stop the creation of uFAIs (and also vice versa). This is addressed at some length in Eliezer's paper Artificial Intelligence as a Positive and Negative Factor in Global Risk.

I don't find this answer totally satisfying. It seems like an awfully detailed prediction to make in absence of a technical theory of AGI.

Comment by earthwormchuck163 on Jason Silva on AI safety · 2012-05-09T23:02:59.210Z · LW · GW

No need to apologize. It's clear in hindsight that I made a poor choice of words.

Comment by earthwormchuck163 on Jason Silva on AI safety · 2012-05-09T21:56:09.558Z · LW · GW

I'm not saying the apparent object level claim (ie intelligence implies benevolence) is wrong. Just that it does in fact require further examination. Whereas here it looks like an invisible background assumption.

Did my phrasing not make it clear that this is what I meant, or did you interpret me as I intended and still think it sounds condescending?

Comment by earthwormchuck163 on Jason Silva on AI safety · 2012-05-09T20:25:15.589Z · LW · GW

Because it's so obvious that it doesn't require further examination. (Of course this is wrong and it does, but he hasn't figured that out yet.)

Comment by earthwormchuck163 on The Quick Bayes Table · 2012-04-19T02:08:39.687Z · LW · GW

I am putting a printout of this chart on my desk until I have it internalized. No more fumbling around trying to do numerical updates in conversation in real time.

Comment by earthwormchuck163 on March 2012 Media Thread · 2012-03-02T00:49:29.902Z · LW · GW

I'm probably going to watch this this weekend. Looks pretty fun.