Posts

A Ketogenic Diet as an Effective Cancer Treatment? 2013-06-26T22:40:23.597Z · score: 5 (14 votes)

Comments

Comment by notsonewuser on Meetup : NYC Solstice · 2015-11-08T22:57:58.400Z · score: 0 (0 votes) · LW · GW

The date on this is wrong. It says it is on November 19, 2016, while the website says it is on December 19, 2015.

Comment by notsonewuser on September 2015 Media Thread · 2015-09-22T13:15:38.778Z · score: 0 (0 votes) · LW · GW

Just finished reading Happiness: A Very Short Introduction by Daniel M. Haybron. It was an excellent read, and well worth my time.

Comment by notsonewuser on Stupid Questions July 2015 · 2015-07-06T02:18:34.472Z · score: 1 (1 votes) · LW · GW

What is the joke behind the title "Highly Advanced Epistemology 101 for Beginners"? I understand that it's redundant, but is that the only reason why it's supposed to be funny, or is there some further underlying joke?

Edit: Or, to be clearer, why was the title not just "Highly Advanced Epistemology 101"? I understand that there may be a separate joke given the juxtaposition of "Highly Advanced" and "101".

Comment by notsonewuser on Beware of Other-Optimizing · 2015-06-30T15:10:56.986Z · score: 1 (1 votes) · LW · GW

I think that's commonly referred to as a lost purpose here on Less Wrong.

Comment by notsonewuser on Seeking Estimates for P(Hell) · 2015-03-21T16:28:05.770Z · score: 3 (3 votes) · LW · GW

I think the scenario of an AI torturing humans in the future is very, very unlikely. For most possible goals an AI could have, it will have ways to accomplish them that are more effective than torturing humans.

Comment by notsonewuser on Announcing the Complice Less Wrong Study Hall · 2015-02-20T18:29:29.222Z · score: 1 (1 votes) · LW · GW

Maybe I'll start visiting again soon!

Comment by notsonewuser on Reductionist research strategies and their biases · 2015-02-06T17:46:03.032Z · score: 4 (4 votes) · LW · GW

I don't see this as a Gish Gallop, as it doesn't even appear to me to be an argument. It just looks like a list of biases that reductionists should take extra care to avoid. The "should" part wasn't argued, just assumed.

Comment by notsonewuser on 2014 Less Wrong Census/Survey · 2014-12-23T22:18:12.364Z · score: 0 (0 votes) · LW · GW

Yes, last year. I expect with 75% confidence that it will happen again this year.

Comment by notsonewuser on State of the Solstice 2014 · 2014-12-23T21:47:38.824Z · score: 5 (5 votes) · LW · GW

I was in New York City on the actual date of the solstice, December 21. I'll be living there a year from now, and this post makes me excited about taking part in next year's event!

Comment by notsonewuser on State of the Solstice 2014 · 2014-12-23T21:45:44.923Z · score: 1 (1 votes) · LW · GW

Your comment ends with an incomplete sentence.

Comment by notsonewuser on 2014 Less Wrong Census/Survey · 2014-10-31T04:32:56.772Z · score: 27 (27 votes) · LW · GW

Took the survey. Thanks for the karma, everyone.

Comment by notsonewuser on Simulate and Defer To More Rational Selves · 2014-09-11T02:35:34.862Z · score: 7 (7 votes) · LW · GW

But Quirrell didn't cause Eliezer to write HPMOR...

Comment by notsonewuser on Simulate and Defer To More Rational Selves · 2014-09-09T02:19:22.987Z · score: 7 (7 votes) · LW · GW

This seems to be an extremely powerful method for handling decision fatigue - it's one of the few (maybe the only?) things I've seen on Less Wrong that I'm going to start applying immediately because of the potential I see in it. On the other hand, I doubt it would be so effective for me for handling social anxiety or other emotion-laden situations. A voice in my head telling me to do something that I already know I should do won't make the emotion go away, and, for me, the obstacle in these sorts of situations is definitely the emotion.

Comment by notsonewuser on Open thread, Sept. 1-7, 2014 · 2014-09-03T21:36:06.409Z · score: 6 (6 votes) · LW · GW

Out of curiosity, do you suspect (let's say with p >= .05) that lucid dreaming is unsafe? Or do you know of someone on this site who does? I'd like to know why, because I lucid dream somewhat frequently. But I don't personally see any reason to think it would be less safe than regular dreaming, especially as I see awareness while dreaming as something on a sliding scale, not a binary "yes" or "no" question.

Comment by notsonewuser on On not getting a job as an option · 2014-03-11T18:49:26.963Z · score: 4 (6 votes) · LW · GW

I want to have a job because I want to know that I'll have access to (healthy) food and (pleasant) shelter, and I don't want to live with my parents for the rest of their lives.

How can I be reasonably confident that I'll have those two things without having a job?

Edit: To the person who downvoted this comment, why? It was a completely serious comment, which responded to a question Diego asked in the post.

Comment by notsonewuser on L-zombies! (L-zombies?) · 2014-02-08T15:34:51.378Z · score: 1 (1 votes) · LW · GW

Good post, but...

I imagined myself as those L-zombies as I was reading through and trying to understand. Thus they're not L-zombies anymore. Did you do the same as you were writing?

Comment by notsonewuser on February 2014 Media Thread · 2014-02-02T22:25:26.081Z · score: 1 (1 votes) · LW · GW

I've been listening to Midnight Memories, an album by One Direction. Listening to the music on this album always seems to significantly improve my mood.

Comment by notsonewuser on February 2014 Media Thread · 2014-02-02T22:07:19.701Z · score: 0 (0 votes) · LW · GW

Use rot13 for spoilers.

Comment by notsonewuser on Cynical About Cynicism · 2014-01-26T14:12:52.985Z · score: 0 (0 votes) · LW · GW

OK, I'm in complete agreement with you.

Comment by notsonewuser on Cynical About Cynicism · 2014-01-26T01:27:44.056Z · score: 0 (0 votes) · LW · GW

What do I seem confused about to you?

Comment by notsonewuser on On Not Having an Advance Abyssal Plan · 2014-01-25T21:00:07.825Z · score: 0 (0 votes) · LW · GW

I only wish I'd done that....

Comment by notsonewuser on Cynical About Cynicism · 2014-01-25T13:47:57.635Z · score: 0 (0 votes) · LW · GW

What about people who adopt children from a foreign country, rather than having their own biological children? I personally know a couple who did that. (I plan on doing the same if I get married - maybe not from a foreign country, but definitely adopting.)

Comment by notsonewuser on Cynical About Cynicism · 2014-01-24T19:51:29.402Z · score: 2 (2 votes) · LW · GW

What about religious people who take vows of celibacy?

I think people care more about self-preservation than reproduction, honestly. I know I do!

Edit: Upon reflection, and receiving some replies here, I actually think Tim made a pretty strong case. However, though "playing chess" may be the "single most helpful simple way to understand" Deep Blue's behavior, it is wrong to say this of "trying to reproduce" for human behavior. You could predict only a very small percentage of my behavior using that information (I've never kissed a girl or had sex, despite wanting to) - whereas, using "self-preservation" and "seeking novelty", you could predict quite a bit of it. I suspect this is not just true of me, but of many people.

Edit 2: Though you could poke holes in my first edit. Like, maybe the reason I don't try to reproduce now is only because I've failed in the past. But this hinges on that being a violation of Tim's point. Also, see this later comment I made, which I think is pretty much a knockdown refutation.

Edit 3: See this comment by memoridem for a succinct summary of my position. Somehow, none of my comments in this conversation came out clearly.

Comment by notsonewuser on Rationality Quotes January 2014 · 2014-01-22T01:04:15.593Z · score: 7 (13 votes) · LW · GW

Atheism shows strength of mind, but only to a certain degree.

-- Blaise Pascal

Comment by notsonewuser on Higher Purpose · 2014-01-22T00:47:09.776Z · score: 0 (0 votes) · LW · GW

I'd rather an arbitrary person want to be a good person because it makes em feel good than spend all of eir free time and money on video games because they are fun. I think this post is too hard on people from the former group. After all, they're doing something good rather than something else!

Comment by notsonewuser on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-21T21:11:29.830Z · score: 0 (0 votes) · LW · GW

Yes. I should have made that clearer. I'll edit my comment.

Comment by notsonewuser on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-21T15:25:15.168Z · score: 19 (19 votes) · LW · GW

Going by only the data Yvain made public, I defined "experienced rationalists" as people with 1000 karma or more (this might be slightly different from Yvain's sample, but it looked as if most who had that much karma had been in the community for at least 2 years) and looked only at those experienced rationalists who recorded both a cryonics probability and their cryonics status. That gives the following data (all figures are percentages, so 50 means 50% confidence (1 in 2), while 0.5 means 0.5% confidence (1 in 200)):

For those who said "No - and do not want to sign up for cryonics", we have for the cryonics success probability estimate (and this is conditioning on no global catastrophe) (0.03,1,1) (this is (Q1,median,Q3)), with mean 0.849 and standard deviation 0.728. This group was size N = 32.

For those who said "No - still considering it", we have (5,5,10), with mean 7.023 and standard deviation 2.633. This group was size N = 44.

For those who wanted to but for some reason hadn't signed up yet (either not available in the area (maybe worth moving for?) or otherwise procrastinating), we have (15,25,37), with mean 32.069 and standard deviation 23.471. This group was size N = 29.

Finally, for the people who have signed up, we have (7,21.5,33), with mean 26.556 and standard deviation 22.389. This group was size N = 18.

If we put all of the "no" people together (those procrastinating, those still thinking, and those who just don't want to), we get (2,5,15), with mean 12.059 and standard deviation 17.741. This group is size N = 105.

I'll leave the interpretation of this data to Mitchell_Porter, since he's the one who made the original comment. I presume he had some point to make.

(I used Excel's population standard deviation computation to get the standard deviations. Sorry if I should have used a different computation. The sample standard deviation yielded very similar numbers.)
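For anyone who wants to reproduce these numbers, here is a minimal sketch of the computation, assuming the public survey export is a CSV whose columns are named "KarmaScore", "CryonicsStatus", and "CryonicsProbability" (those names and the filename are my guesses for illustration; the actual file may differ):

```python
# Sketch of the summary statistics described above, under the assumed
# column names "KarmaScore", "CryonicsStatus", "CryonicsProbability" (%).
import pandas as pd

df = pd.read_csv("2013_survey_public.csv")  # hypothetical filename

# "Experienced rationalists": 1000+ karma, with both cryonics fields present.
exp = df[(df["KarmaScore"] >= 1000)
         & df["CryonicsStatus"].notna()
         & df["CryonicsProbability"].notna()]

for status, group in exp.groupby("CryonicsStatus"):
    p = group["CryonicsProbability"]
    print(status,
          "Q1/median/Q3:", p.quantile([0.25, 0.5, 0.75]).round(2).tolist(),
          "mean:", round(p.mean(), 3),
          "pop. std dev:", round(p.std(ddof=0), 3),  # matches Excel's STDEVP
          "N:", len(p))
```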

Comment by notsonewuser on Free to Optimize · 2014-01-19T19:01:39.865Z · score: 0 (0 votes) · LW · GW

The AI is optimizing how much money you make, not how much work you do. To determine how much the AI has helped you, I think the best way to go about it is to ask counterfactually how much money you would have made if the AI weren't there. Judging by this criterion, the first view is correct.

However, I like Eliezer's proposal of better rules quite a bit.

Comment by notsonewuser on 2013 Survey Results · 2014-01-19T18:32:29.462Z · score: 16 (16 votes) · LW · GW

Yvain - Next year, please include a question asking if the person taking the survey uses PredictionBook. I'd be curious to see if these people are better calibrated.

Comment by notsonewuser on Prolegomena to a Theory of Fun · 2014-01-17T14:36:32.117Z · score: 0 (0 votes) · LW · GW

I can personally testify that praising God is an enormously boring activity, even if you're still young enough to truly believe in God.

To each eir own. Praying was actually pretty fun, given that I thought I was getting to talk to an all-powerful superbeing who was also my best friend. Think of Calvin talking to Hobbes.

As for group singing praising God, I loved that. Singing loudly and proudly with a large group of friends is probably what I miss most of all about Christianity.

Comment by notsonewuser on Anthropic Atheism · 2014-01-13T22:51:31.471Z · score: 0 (0 votes) · LW · GW

The paths followed by probability are not the paths of causal influence, but the paths of logical implication, which run in both directions.

Yep, that was pretty dumb. Thanks for being gentle with me.

However, I still don't understand what's wrong with my conclusion in your version of Sleeping Beauty. Upon waking, Sleeping Beauty (whichever copy of her) doesn't observe anything (colored stones or otherwise) correlated with the result of the coin flip. So it seems she has to stick with her original probability of tails having been flipped, 1/2.

Next, out of curiosity, if you had participated in my red/green thought experiment in real life, how would you anticipate if you woke up in a red room (not how would you bet, because I think IRL you'd probably care about copies of you)? I just can't even physically bring myself to imagine seeing 9,999 copies of me coming out of their respective rooms and telling me they saw red, too, when I had been so confident beforehand that this very situation would not happen. Are you anticipating in the same way as me here?

Finally, let's pull out the anthropic version of your stones in a bag experiment. Let's say someone flips an unbiased coin; if it comes up heads, you are knocked out and wake up in a white room, while if it comes up tails, you are knocked out, then copied, and one of you wakes up in a white room and the other wakes up in a black room. Let's just say the person in each room (or in just the white room if that's the only one involved) is asked to guess whether the coin came up heads or tails. Let's also say, for whatever reason, the person has resolved to, if ey wakes up in the white room, guess heads. If ey wakes up in the black room, ey won't be guessing, ey'll just be right. Now, if we repeat this experiment many times, with different people, and look at all of the different people (/copies) who actually did wake up in white rooms, exactly half of them will have guessed right. Right now I'm just talking about watching this experiment many times from the outside. In fact, it doesn't matter with what probability the person resolves to guess heads if ey wakes up in the white room - this result holds (that around half of the guesses from white rooms will be correct, in the long run).
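Here is a quick Monte Carlo sketch of that claim, with made-up function and parameter names, counting one white-room occupant per trial and ignoring the black-room copy (who already knows the answer):

```python
# Among people who wake up in white rooms, about half guess the coin
# correctly, no matter what guessing probability they precommit to.
import random

def run(trials=100_000, p_guess_heads=0.9):
    correct = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        # Heads: one person in a white room. Tails: one copy in a white room
        # (the black-room copy is certain of the answer and not counted).
        guess = "heads" if random.random() < p_guess_heads else "tails"
        correct += (guess == coin)
    return correct / trials

for p in (0.0, 0.5, 0.9, 1.0):
    print(p, round(run(p_guess_heads=p), 3))  # all hover around 0.5
```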

Now, given all of that, here's how I would reason, from the inside of this experiment, if we're doing log scores in utils (if for some reason I didn't care about copies of me, which IRL I would) for a probability of heads. Please tell me if you'd reason differently, and why:

In a black room, duh. So let's say I wake up in a white room. I'd say, well, I only want to maximize my utility. The only way I can be sure to uniquely specify myself, now that I might have been copied, is to say that I am "notsonewuser-in-a-white-room". Saying "notsonewuser" might not cut it anymore. Historically, when I've watched this experiment, "person-in-a-white-room" guesses the coin flip correctly half of the time, no matter what strategy ey has used. So I don't think I can do better than to say 1/2. So I say 1/2 and get -1 util (as opposed to an expected -1.08496... utils which I've seen historically hold up when I look at all the people in white rooms who have said a 2/3 probability of heads).
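Those two numbers work out as expected base-2 log scores, with heads and tails each holding for half of the white-room wake-ups:

```latex
\tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{2}\log_2\tfrac{1}{2} = -1
\quad \text{(reporting } P(\text{heads}) = \tfrac{1}{2}\text{)}

\tfrac{1}{2}\log_2\tfrac{2}{3} + \tfrac{1}{2}\log_2\tfrac{1}{3} \approx -1.08496
\quad \text{(reporting } P(\text{heads}) = \tfrac{2}{3}\text{)}
```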

Now I also need to explain why I think this differs from the obvious situation you brought up (obvious in that the answer was obvious, not in that it wasn't a good point to make, I think it definitely was!). For one thing, looking historically at people who pick out white stones, they have been in heads-world 2/3 of the time. I don't seem to have any other coherent answer for the difference, though, to be honest (and I've already spent hours thinking about this stuff today, and I'm tired). So my reduction's not quite done, but given the points I've made here, I don't think yours is, either. Maybe you can see flaws in my reasoning, though. Please let me know if you do.

EDIT: I think I figured out the difference. In the situation where you are simply reaching into a bag, the event "I pull out a white stone." is well defined. In the situation in which you are cloned, the event "I wake up in a white room." is only well-defined when it is interpreted as "Someone who subjectively experiences being me wakes up in a white room.", and waking up in a black room is not evidence against the truth of this statement, whereas pulling out a black stone is pretty much absolute evidence that you did not pull out a white stone.

Comment by notsonewuser on Anthropic Atheism · 2014-01-13T16:19:11.329Z · score: 0 (0 votes) · LW · GW

Consider, then, the Sleeping Beauty problem with duplication instead of memory-erasure (i.e., a duplicate is made of SB if the coin lands tails). Now you can't add their utilities together anymore. At what probability (descending from 1 to 0) should a newly-woken SB start taking the bet that they're in the Tails world?

OK, if I'm interpreting this right, you mean to say that Sleeping Beauty is put to sleep, and then a coin is flipped. If it comes up tails, she is duplicated; if it comes up heads, nothing additional is done. Then, wake all copies of Sleeping Beauty up. What probability should any particular copy of Sleeping Beauty assign that the coin came up tails? If this is not the question you're asking, please clarify for me. I know you mentioned betting but let's just base this on log score and say the return is in utils so that there isn't any ambiguity. Since you're saying they don't add utilities, I'm also going to assume you mean each copy of Sleeping Beauty only cares about herself, locally.

So, given all of that, I don't see how the answer is anything but 1/2. The coin is already flipped, and fell according to the standard laws of physics. Being split or not doesn't do anything to the coin. Since each copy only cares about herself locally, in fact, why would the answer change? You might as well not copy Sleeping Beauty at all in the tails world, because she doesn't care about her copies. Her answer is still 1/2 (unless of course she knew the coin was weighted, etc.).

I mean, think about it this way. Suppose an event X was about to happen. You are put to sleep. If X happens, 10,000 copies of you are made and put into green rooms, and you are put into a red room. If X does not happen, 10,000 copies of you are made and put into red rooms, and you are put into a green room. Then all copies of you wake up. If I was 99.9% sure beforehand that X was going to happen and woke up in a red room, I'd be 99.9% sure that when I exited that room, I'd see 10,000 copies of me leaving green rooms. And if I woke up in a green room, I'd be 99.9% sure that when I exited that room, I'd see 9,999 copies of me leaving green rooms, and 1 copy of me leaving a red room. Copying me doesn't go back in time and change what happened. This reminds me of the discussion on Ultimate Newcomb's Problem, where IIRC some people thought you could change the prime-ness of a number by how you made a choice. That doesn't work there, and it doesn't work here, either.

From the outside though, there isn't a right answer. But, of course, from the inside, yes there is a right answer. From the outside you could count observer moments in a different way and get a different answer, but IRL there's only what actually happens. That's what I was trying to get at.

Now I expect I may have misinterpreted your question? But at least tell me if you think I answered my own question correctly, if it wasn't the same as yours.

Comment by notsonewuser on Anthropic Atheism · 2014-01-13T01:58:46.899Z · score: 0 (0 votes) · LW · GW

But then, in the Sleeping Beauty problem, you use a different unspecified prior, where each person produced gets an equal weight, even if this means giving different weights to different states of the world.

I'm really confused. What question are you asking? If you're asking what probability an outsider should assign to the coin coming up heads, the answer's 1/2, if that outsider doesn't have any information about the coin. nyan_sandwich implies this when ey says

(this way she gets $2 half the time instead of $1 half the time for heads).

If you're asking what probability Sleeping Beauty should assign, that depends on what the consequences of making such an assignment are. nyan_sandwich makes this clear, too.

And, finally, if you're asking for an authoritative "correct" subjective probability for Sleeping Beauty to have, I just don't think that notion makes sense, as probability is in the mind. In fact in this case if you pushed me I'd say 1/2 because as soon as the coin is flipped, it lands, the position is recorded, and Sleeping Beauty waking up and falling asleep in the future can't go back and change it. Though I'm not that sure that makes sense even here, and I know similar reasoning won't make sense in more complicated cases. In the end it all comes down to how you count but I'm not sure we have any disagreement on what actually happens during the experiment.

I say (and I think nyan_sandwich would agree), "Don't assign subjective probabilities in situations where it doesn't make a difference." This would be like asking if a tree that fell in a forest made a sound. If you count one way, you get one answer, and if you count another way, you get another. To actually be able to pay off a bet in this situation you need to decide how to count first - that is what differentiates making probability assignments here from other, "standard" situations.

I expect you disagree with something I've said here and I'd appreciate it if you flesh it out. I don't necessarily expect to change your mind and I think it's a distinct possibility you could change mine.

Comment by notsonewuser on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-13T01:22:13.848Z · score: 0 (0 votes) · LW · GW

how do you italicize in comments?

Asterisks around your *italic text* like that. There should be a "Show help" button below the comment field which will pop up a table that explains this stuff.

Isn't this why death is a fundamental problem for people?

I actually think so. I mean, I used to think of death as this horrible thing, but I realized that I will never experience being dead, so it doesn't bother me so much anymore. Not being alive bothers me, because I like being alive, but that's another story. However, I'm dying all the time, in a sense. For example, most of the daily thoughts of 10-year-old me are thoughts I will never have again; in particular, because I live somewhere else now, I won't even have the same patterns being burned into my visual cortex.

I think the crux of the issue is that you believe generic "Chris H consciousness" is all that matters, no matter what platform is running it.

That's a good way of putting it. The main thing that bothers me about focusing on a "particular" person is that I (in your sense of the word) have no way of knowing whether I'm a copy (in your sense of the word) or not. But I do know that my experiences are real. So I would prefer to say not that there is a copy but that there are two originals. There is, as a matter of fact, a copy in your sense of the word, but I don't think that attribute should factor into a person's decision-making (or moral weighting of individuals). The copy has the same thoughts as the original for the same reason the original has his own thoughts! So I don't see why you consider one as being privileged, because I don't see location as being that which truly confers consciousness on someone.

Comment by notsonewuser on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-12T22:22:16.614Z · score: 0 (0 votes) · LW · GW

Why do you consider Chris Hallquist, when he wakes up in the morning, to be the same person as the one who went to bed the night before (do you?)?

There are two entities; just because you made a copy doesn't mean that when you destroy the original that the original isn't changed as a result.

The original is changed. And I agree that there are two entities. But I don't see why Chris Hallquist should care about that before the split even occurs. Would you undergo the amnesia procedure (if you were convinced the tech worked, that the people were being honest, etc.) for $1000? What's the difference between that and a 5-minute long dreamless sleep (other than the fact that a dead body has magically appeared outside the room)?

Comment by notsonewuser on Anthropic Atheism · 2014-01-12T20:27:27.860Z · score: 1 (3 votes) · LW · GW

But it leaves open a trap where people accidentally sneak in whatever prior probabilities they want - I think you fall into this on the Sleeping Beauty problem.

I see this as explicitly not happening. nyan_sandwich says:

No update happens in the Doomsday Argument; both glorious futures and impending doom are consistent with my existence, their relative probability comes from other reasoning.

"Other reasoning" including whatever prior probabilities were there before.

Comment by notsonewuser on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-12T20:03:44.635Z · score: 1 (1 votes) · LW · GW

Both "Chris" and "copy of Chris" are Chris Hallquist. Both remember being Chris Hallquist, which is the only way anyone's identity ever persists. Copy of Chris would insist that he's Chris Hallquist for the same reason the original Chris would insist so. And as far as I'm concerned, they'd both be right - because if you weren't in the room when the copying process happened, you'd have no way of telling the difference. I don't deny that as time passes they gradually would become different people.

I prefer to frame things this way. Suppose you take Chris Hallquist and scan his entire body and brain such that you could rebuild them exactly the same way later. Then you wait 5 minutes and kill him. Now you use the machine to rebuild his body and brain. Is Chris Hallquist dead? I would say no - it would basically be the same as if he had amnesia - I would rather experience amnesia than be killed, and I definitely don't anticipate having the same experiences in either case. Yet your view seems to imply that, since the original was killed, despite having a living, talking Chris Hallquist in front of you, it's somehow not really him.

Edit: Moreover, if I was convinced the technology worked as advertised, I would happily undergo this amnesia process for even small amounts of money, say, $100. Just to show that I actually do believe what I'm saying.

Comment by notsonewuser on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-12T15:41:33.166Z · score: 0 (0 votes) · LW · GW

But why would Chris Hallquist care about this "fundamental principle of identity", if it makes no difference to his experiences?

Comment by notsonewuser on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-12T12:20:08.355Z · score: 0 (0 votes) · LW · GW

Since the copy of Chris Hallquist would say "I am Chris Hallquist" for the same reason Chris Hallquist says "I am Chris Hallquist", I would say that the copy of Chris Hallquist just is Chris Hallquist in every way. So Chris Hallquist still has Chris Hallquist's consciousness in the cryonics scenario. In the computer scenario, both Chris Hallquist in the flesh and Chris Hallquist on the computer have Chris Hallquist's consciousness. Over time they might become different versions of Chris Hallquist if exposed to different things, but at the start, from the inside, it seems the same to both.

Comment by notsonewuser on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-11T21:22:01.206Z · score: 1 (3 votes) · LW · GW

I think machines can have consciousness, and I think a copy of you can have consciousness, but you can't have the consciousness of your copy, and it seems to me that after death and freezing you would get a copy of you, which would perhaps be good for a number of reasons, but not for the reason that (I presume) is most important--for you (your consciousness) to become immortal.

A copy of you is identical to you. Therefore I don't see how a copy of you could not have your consciousness.

Comment by notsonewuser on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-11T21:14:02.171Z · score: 3 (7 votes) · LW · GW

You have read the full kalla724 thread, right?

I think V_V's comment is sufficient for you to cancel your cryonics subscription. If we get uFAI you lose anyway, so I would be putting my money into that and other existential risks. You'll benefit a lot more people that way.

Comment by notsonewuser on Recognizing Intelligence · 2014-01-05T19:11:30.731Z · score: 1 (1 votes) · LW · GW

You are very, very close to simply restating the Watchmaker Argument in favor of the existence of a Divine Being.

Not at all. The problem with the Watchmaker Argument wasn't the observation that humans are highly optimized; it was the conclusion that, therefore, it was God. And God is a very different hypothesis from alien intelligence in a universe we already know has the capability of producing intelligence.

Comment by notsonewuser on Back Up and Ask Whether, Not Why · 2014-01-05T18:58:08.790Z · score: -1 (1 votes) · LW · GW

From the standpoint of TDT, I see using Dvorak as the obvious choice, and teaching your kids Dvorak rather than QWERTY, etc. Anything but QWERTY.

Comment by notsonewuser on Economic Definition of Intelligence? · 2014-01-05T15:42:59.948Z · score: 0 (0 votes) · LW · GW

But if you were trying random solutions and the solution tester was a black box

Then you're not solving the same optimization problem anymore. If the black box just had two outputs, "good" and "bad", then, yes, a black box that accepts fewer input sequences is going to be one that is harder to make accept. On the other hand, if the black box had some sort of metric on a scale from "bad" going up to "good", and the optimizer could update on the output each time, the sequence problem is still going to be much easier than the MP3 problem.
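To illustrate the difference with a toy stand-in (a random target bit string rather than the sequence and MP3 problems from the post; all names and sizes here are made up), a graded black box lets a simple hill climber succeed quickly, while a yes/no box leaves you with brute-force guessing:

```python
# A black box that only says "good"/"bad" versus one that returns a graded
# score the optimizer can climb. Target and scorer are for illustration only.
import random

N = 40
target = [random.randint(0, 1) for _ in range(N)]

def graded_score(candidate):
    # Number of positions matching the target (0..N).
    return sum(c == t for c, t in zip(candidate, target))

def climb(max_evals=100_000):
    # Flip one random bit at a time; keep the flip only if the score improves.
    cand = [random.randint(0, 1) for _ in range(N)]
    best = graded_score(cand)
    for evals in range(1, max_evals + 1):
        i = random.randrange(N)
        cand[i] ^= 1
        s = graded_score(cand)
        if s > best:
            best = s
        else:
            cand[i] ^= 1  # revert the flip
        if best == N:
            return evals
    return None

print("graded box solved after", climb(), "evaluations")
# Typically a few hundred evaluations. Against a binary box ("good" only for
# the exact target), random guessing needs about 2**40 tries on average, so a
# more selective box really is harder to satisfy without graded feedback.
```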

Comment by notsonewuser on Inner Goodness · 2014-01-04T19:45:39.653Z · score: 2 (2 votes) · LW · GW

They will object to the idea of founding the AI on human morals in any way, saying, "But humans are such awful creatures," not realizing that it is only humans who have ever passed such a judgment.

This is a gem.

Comment by notsonewuser on The AI in a box boxes you · 2014-01-04T19:38:04.720Z · score: 0 (0 votes) · LW · GW

I'll hazard a guess, and say no. Remember that the Gatekeeper is allowed to just drop out of character. See this post for more.

Comment by notsonewuser on Timeless Decision Theory and Meta-Circular Decision Theory · 2014-01-04T18:56:47.395Z · score: 1 (1 votes) · LW · GW

It means "If A were true, then O would be true." Note that this is a counterfactual statement.

Comment by notsonewuser on Inner Goodness · 2014-01-04T16:24:27.438Z · score: 0 (0 votes) · LW · GW

This post seems to say I should look in the mirror to get my answer

I'd agree with that. However, I doubt Eliezer would agree that you should only look in the mirror. Perhaps we can steel-man the concept: Look in the mirror, holding up the evidence you've gone out into the world to collect. At that point, if you see two pieces of paper with possible answers to your question float by, it will be you who chooses which answer is better. Even if one of the pieces of paper just says "Do whatever your parents tell you to", it would still be you who chose to listen to that piece of paper rather than another one. (Eliezer makes this analogy somewhere (and he does a better job than I did), but I couldn't find it; otherwise, I would have cited it.)

Comment by notsonewuser on Inner Goodness · 2014-01-04T15:52:43.077Z · score: 0 (0 votes) · LW · GW

The "ethicists should be inside rather than outside a profession" link is broken. You can find it archived here.

Comment by notsonewuser on January 2014 Media Thread · 2014-01-02T16:31:58.233Z · score: 2 (2 votes) · LW · GW

I've been listening to Random Access Memories, an album by Daft Punk. I've found it quite useful for getting into a "flow" state while working, and I enjoy listening to the music recreationally, as well.