Comments

Comment by Yorick_Newsome on Open Thread: December 2009 · 2009-12-08T04:39:32.031Z · LW · GW

I spent much of my childhood obsessing over symmetry. At one point I wanted to be a millionaire solely so I could buy a mansion, because I had never seen a symmetrical suburban house.

Comment by Yorick_Newsome on Open Thread: December 2009 · 2009-12-02T12:57:34.741Z · LW · GW

I'll replace it without the spacing so it's more compact. Sorry about that, I'll work on my comment etiquette.

Comment by Yorick_Newsome on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-02T11:25:56.555Z · LW · GW

Maybe I'm wrong, but it seems most people here follow the decision theory discussions just for fun. Until introduced, we just didn't know it was so interesting! That's my take anyways.

Comment by Yorick_Newsome on Open Thread: December 2009 · 2009-12-02T11:06:32.921Z · LW · GW

Big Edit: Jack formulated my ideas better, so see his comment.
This was the original: The fact that the universe hasn't been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenarios is most likely? Related question: If we built a superintelligence without worrying about friendliness or morality at all, what kind of things would it optimize? Can we even make a guess? Would it be satisfied to be a dormant Laplace's Demon?

Comment by Yorick_Newsome on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-02T06:40:50.228Z · LW · GW

I had a dream where some friends and I invaded the "Less Wrong Library", and I agree it was most impressive. ...in my dream.

Comment by Yorick_Newsome on Rationality Quotes November 2009 · 2009-12-01T14:10:16.183Z · LW · GW

^ Yossarian, a character in the novel Catch-22, by Joseph Heller.

Comment by Yorick_Newsome on The Moral Status of Independent Identical Copies · 2009-12-01T13:17:40.283Z · LW · GW

I am probably in way over my head here, but...

The closest thing to teleportation I can imagine is uploading my mind and sending the information to my intended destination at lightspeed. I wouldn't mind if, once the information was copied, the teleporter deleted the old copy. If instead of 1 copy the teleporter made 50 redundant copies just in case, and destroyed 49 once it was confirmed the teleportation was successful, would that be like killing me 49 times? Are 50 copies of the same mind being tortured any different from 1 mind being tortured? I do not think so. It is just redundant information; there is no real difference in experience. Thus, in my mind, only 1 of the 50 minds matters (or the 50 minds are essentially 1 mind). The degree to which the other 49 matter is only equal to the difference in information they encode. (Of course, a superintelligence would see about as much relative difference in information between humans as we see between ants; but we must take an anthropocentric view of state complexity.)

The me in other quantum branches can be very, very similar to the me in this one. I don't mind dying in one quantum branch all that much if the me not dying in other quantum branches is very similar to the me that is dying. The reason I would like there to be more mes in more quantum branches is that other people care about the mes. That is why I wouldn't play quantum immortality games (along with the standard argument that in the vast majority of worlds you would end up horribly maimed).

If the additional identical copies count for something, despite my intuitions, at the very least I don't think their value should aggregate linearly. I would hazard a guess that a utility function which does that has something wrong with it. If you had 9 identical copies of Bob and 1 copy of Alice, and you had to kill off 8 copies, there must be some terminal value for complexity that keeps you from randomly selecting 8, and instead automatically decides to kill off 8 Bobs (given that Alice isn't a serial killer, utility of Alice and Bob being equal, yada yada yada.)

I think that maybe instead of minds it would be easier and less intuition-fooling to think about information. I also think that, like I said, I am probably missing the point of the post.
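
(A toy sketch of the Bob-and-Alice intuition above, purely illustrative: the function name, the epsilon value, and the numbers are made up for this example and are not anyone's actual proposal. It values a population by its distinct minds, so extra identical copies add almost nothing, and deliberately killing 8 Bobs beats killing 8 at random.)

```python
# Toy illustration: value a population by its distinct minds, so that
# redundant identical copies contribute almost nothing beyond the first.

def population_value(population, epsilon=0.001):
    """population: list of mind 'descriptions' (here just names).
    Each distinct mind counts for 1; each redundant copy adds only epsilon."""
    distinct = set(population)
    redundant = len(population) - len(distinct)
    return len(distinct) + epsilon * redundant

# Start with 9 identical Bobs and 1 Alice; 8 must be killed off.
survivors_if_bobs_killed = ["Bob", "Alice"]   # kill 8 Bobs deliberately
survivors_if_random = ["Bob", "Bob"]          # a random cull will usually
                                              # leave 2 Bobs and no Alice

print(population_value(survivors_if_bobs_killed))  # 2.0   (two distinct minds)
print(population_value(survivors_if_random))        # 1.001 (one mind + a copy)
```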

Comment by Yorick_Newsome on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-01T06:39:15.910Z · LW · GW

I'm slowly waking up to the fact that people at the Singularity Institute as well as Less Wrong are dealing with existential risk as a Real Problem, not just a theoretical idea to play with in an academic way. I've read many essays and watched many videos, but the seriousness just never really hit my brain. For some reason I had never realized that people were actually working on these problems.

I'm an 18-year-old recent high school dropout, about to nab my GED. I could go to community college, or I could go along with my plan of leading a simple life working a simple job, which I would be content doing. I'm a sort of tabula rasa here: if I wanted to get into the position where I would be of use to the SIAI, what skills should I develop? Which of the 'What we're looking for' traits would be most useful in a few years? (The only thing I'm good at right now is reading very quickly and retaining large amounts of information about various fields: but I rarely understand the math, which is currently very limiting.)

Comment by Yorick_Newsome on Rationality Quotes November 2009 · 2009-12-01T01:27:56.002Z · LW · GW

Point taken, I just think that it's normally not good. I also think that maybe, for instance, libertarians and liberals have different conceptions of selfishness that lead the former to go 'yay, selfishness!' and the latter to go 'boo, selfishness!'. Are they talking about the same thing? Are we talking about the same thing? In my personal experience, selfishness has always been demanding half of the pie when fairness is one-third, leading to conflict and bad experiences that could have been avoided. We might just have different conceptions of selfishness.

Comment by Yorick_Newsome on Rationality Quotes November 2009 · 2009-11-30T06:34:12.749Z · LW · GW

I liked this comment, but as anonym points out far below, the original blog post is really talking about "pre-scientific and scientific ways of investigating and understanding the world." So 'just a few centuries ago' might not be very accurate in the context of the post. The author's fault, not yours; but just sayin'.

Comment by Yorick_Newsome on Rationality Quotes November 2009 · 2009-11-30T06:24:09.083Z · LW · GW

Oh jeeze, how did I miss that? Thanks for taking the time to enlighten me. About the ETA, I noticed that too, which may be relevant to another discussion I saw nested under the original quotation...

Comment by Yorick_Newsome on Rationality Quotes November 2009 · 2009-11-30T06:06:38.275Z · LW · GW

Did you read it in the context of the atheist blog post Eliezer linked to? I agree that the quote was possibly meant to be cautionary, but I think it was primarily meant to show that believing in things 200 years old is generally not a good idea. Maybe I misunderstood the point of the post, though; the cautionary value is a more useful interpretation for us aspiring rationalists, and 'don't put faith in ancient wisdom' is rather simple advice by comparison. Because of that, context be damned (even if I did interpret it as was meant), I'm going to switch to your interpretation. :)

Comment by Yorick_Newsome on Rationality Quotes November 2009 · 2009-11-30T03:13:45.737Z · LW · GW

I think he was being sarcastic and trying to suggest that the original quote failed to take note that everyone thinks they are immune from those problems, including the person who decided the past was 'wrong' about them. I'm also pretty sure cousin_it is Russian, if that's relevant. The USA thing was just a tasteful addition, the way I see it. I laughed. (His use of an exclamation point and a look at the top contributors list on the right also indicate sarcasm.)

Edit: I agree with Nick below. It was just a joke. Which I enjoyed.

Comment by Yorick_Newsome on Rationality Quotes November 2009 · 2009-11-30T02:40:14.164Z · LW · GW

"the quality of being selfish, the condition of habitually putting one's own interests before those of others" - wiktionary

I can imagine a super giant mega list of situations where that would be bad, even if selfishness is often a good thing. There's a reason 'selfishness' has negative connotations.

Comment by Yorick_Newsome on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-29T11:48:46.051Z · LW · GW

I'm not Eliezer, and perhaps not being an AGI researcher means that my answer is irrelevant, but I think that things can have a deep aesthetic value or meaning from which one could gain insights into things more important than AI or rationality. One of these things may be the 'something to protect' that Eliezer wrote about. Others may be intrinsic values to discover, to give your rationality purpose. If I could keep only one of a copy of the Gospel of Buddha or a copy of MITECS, I would keep the Gospel of Buddha, because it reminds me of the importance of terminal values like compassion. When I read GEB, the ideas of interconnectedness, of patterns, and of meaning all left me with a clearer thought process than did reading Eliezer's short paper on Coherent Extrapolated Volition, which was enjoyable but just didn't seem to resonate in the same way. Calling these things 'entertaining fluff' may be losing sight of Eliezer's 11th virtue: "The Art must have a purpose other than itself, or it collapses into infinite recursion."
That is all, of course, my humble opinion. Maybe having everyone read about and understand the dangers of black swans and unfriendly AI would be more productive than having them read about and understand the values of compassion and altruism; for if people do not understand the former, there may be no world left for the latter.

Comment by Yorick_Newsome on Request For Article: Many-Worlds Quantum Computing · 2009-11-29T06:50:42.697Z · LW · GW

Ah, thanks for the link. The only summit video I've seen before was Jurgen Schmidhuber's, perhaps I should watch more of 'em.

Comment by Yorick_Newsome on Open Thread: November 2009 · 2009-11-29T04:43:01.060Z · LW · GW

I would repost this in the next open thread, it's not like anyone would get annoyed at the double post (I think), and that site looks like it would interest a lot of people.

Comment by Yorick_Newsome on Open Thread: November 2009 · 2009-11-29T04:41:12.854Z · LW · GW

Perhaps there should be an 'Open Thread' link between 'Top' and 'Comments' above, so that people could get to it easily. If we're going to have an open thread, we might as well make it accessible.

Anyways, I was looking around Amazon for a book on axiology, and I started to wonder: when it comes to fields that are advancing, but not at a 'significant pace', is it better to buy older books (as they've passed the test of time) or newer ones (as they may have improved on the older books and include new info)? My intuition tells me it's better to buy newer books.

Comment by Yorick_Newsome on Request For Article: Many-Worlds Quantum Computing · 2009-11-28T14:52:19.931Z · LW · GW

Well, if the people spitting out the clever non-sequiturs have charming British accents, then possibly. Otherwise, no... is Mensa about 'debating', normally? I always figured it'd be more of a casual social meet-up. But even then I suppose it could quickly dissolve into a mere signalling competition, or a 'debate'.

Comment by Yorick_Newsome on Request For Article: Many-Worlds Quantum Computing · 2009-11-28T13:29:17.814Z · LW · GW

That's a good point. I've never actually interacted with someone in real life that even knew what philosophical zombies were, so my 'intellectual' conversations take place along the lines of 'atheism versus theism', sadly. Maybe there is some merit to joining Mensa after all?

Comment by Yorick_Newsome on Request For Article: Many-Worlds Quantum Computing · 2009-11-28T08:33:13.122Z · LW · GW

This made me chuckle. I suppose that as the intelligence and amount of knowledge held by the average member of an intellectual group go up, the lower bound on how much knowledge someone must have before that member will have an intellectual conversation with them goes up as well.
I'm horrible at communicating clearly, so I'll give an example.

4chan poster: You're a scientologist!? Idiot.
RationalWiki member: You're a creationist?! I refuse to speak to you.
Less Wrong member: You insist that there is such a thing as wavefunction collapse in quantum mechanics?! I see you cannot be saved.

Comment by Yorick_Newsome on Request For Article: Many-Worlds Quantum Computing · 2009-11-28T07:57:12.871Z · LW · GW

At first I wanted to say, "Please do, that would be awesome!", but then I realized it may not be within the domain of 'refining the art of rationality'. Anyone have any rationalizations so that we could talk about quantum computing at Less Wrong? There have been posts on the singularity, after all.

Comment by Yorick_Newsome on Open Thread: November 2009 · 2009-11-26T09:03:21.687Z · LW · GW

Regarding that test, do 'real' IQ tests consist only of pattern recognition? I quite like the cryptographic and 'if a is b and b is...'-type questions found in other online IQ tests, and do well on them. I scored a good 23 points below my average on iqtest.dk, which made me feel sad.

Comment by Yorick_Newsome on Open Thread: November 2009 · 2009-11-26T08:15:15.467Z · LW · GW

I may be the only one of my kind here, but I know absolutely nothing about probabilistic reasoning (I am envious of all you Bayesians and assume you're right about everything. Down with the frequentists!); thus, I think Jaynes would be too far over my head. Maybe there's a dichotomy between philosophy / psychology / high-school Lesswrongers and computer science / physics / math Lesswrongers that makes the group of people at Jaynes-level a small one.

Comment by Yorick_Newsome on How to test your mental performance at the moment? · 2009-11-24T22:56:46.314Z · LW · GW

I play a few 3-minute blitz chess games at FICS. That way my results are quantitative, as I can see my rating going up or down. It's also possible to play a single 3-minute blitz game and estimate how well I seem to be calculating variations and seeing simple tactics. Not the most time-efficient method, I suppose.

The main indicator of my mental state is when there are many candidate moves; if I'm tired or mentally sluggish, I will spend up to 15 precious seconds finding a strong move. When I am in good shape and am in the groove, I normally find a strong continuation in about 5 seconds.

Comment by Yorick_Newsome on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-12T01:26:30.418Z · LW · GW

Something tells me he won't answer this one. But I support the question! I'm awfully curious as well.

Comment by Yorick_Newsome on Open Thread: November 2009 · 2009-11-08T09:32:21.694Z · LW · GW

After looking at the reasoning in that article I was about to credit myself with being unintentionally deep, but I'm pretty sure that when I posed the question I was assuming a fair coin for the sake of the problem. Doh. Thanks for the interesting link.

(It's really kind of embarrassing asking questions about simple probability amongst all the decision theories and Dutch books and priors and posteriors and inconceivably huge numbers. Only way to become less wrong, I suppose.)

Comment by Yorick_Newsome on Open Thread: November 2009 · 2009-11-08T03:32:02.216Z · LW · GW

Ah, that makes a lot more sense: I was looking at the probability from the viewpoint of my guess (i.e. heads) instead of just looking at all outcomes equally (no privileged guesses), if you take my meaning. I also differentiated confidence in my prediction from the chance of my prediction being correct. How I managed to do that, I have no idea. Thanks for the reply.

Comment by Yorick_Newsome on Open Thread: November 2009 · 2009-11-08T02:46:17.651Z · LW · GW

I'd like to ask a moronic question or two that aren't immediately obvious to me and probably should be. (Please note, my education is very limited, especially procedural knowledge of mathematics/probability.)

If I had to guess what the result of a coin flip would be, what confidence would I place in my guess? 50%, because that's the same as the probability of me being correct, or 0%, because I'm just randomly guessing between 2 outcomes and have no evidence to support either (well, I guess there being only 2 outcomes is some kind of evidence)?

Likewise with a lottery. Would I place my confidence level (interval? I don't know the terminology) of winning at 0% or 1/6,000,000? Or some other number entirely?

If this is something I could easily have figured out with Google or Wikipedia, my apologies. Also if my question is incoherent or flawed please let me know.
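
(A minimal simulation sketch of the coin-flip question above, purely illustrative: a fixed guess on a fair coin turns out correct about half the time, so 50% is the natural confidence to attach to it; likewise "my ticket wins" in a 1-in-6,000,000 lottery gets confidence 1/6,000,000 rather than 0.)

```python
import random

# Simulate guessing "heads" on a fair coin many times and count how often
# the guess turns out to be correct.
trials = 100_000
correct = sum(random.choice(["heads", "tails"]) == "heads" for _ in range(trials))
print(correct / trials)   # ~0.5: the guess is right about half the time,
                          # so 50% confidence, not 0%.

# Same idea for a 1-in-6,000,000 lottery: with no special evidence, the
# confidence in "my ticket wins" is just the base rate.
print(1 / 6_000_000)      # ~1.67e-07
```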