Comments

Comment by bsterrett on The Ultimate Newcomb's Problem · 2013-09-11T22:38:34.590Z · LW · GW

Of course! I meant to say that Richard's line of thought was mistaken because it didn't take into account the (default) independence of Omega's choice of number and the Number Lottery's choice of number. Suggesting that there are only two possible strategies for approaching this problem was a consequence of my poor wording.

Comment by bsterrett on The Ultimate Newcomb's Problem · 2013-09-10T20:55:35.318Z · LW · GW

Sorry for my poor phrasing. The Number Lottery's number is randomly chosen and has nothing to do with Omega's prediction of you as a two-boxer or one-boxer. It is only Omega's choice of number that depends on whether it believes you are a one-boxer or two-boxer. Does this clear it up?

Note that there is a caveat: if your strategy for deciding to one-box or two-box depends on the outcome of the Number Lottery, then Omega's choice of number and the Lottery's choice of number are no longer independent.
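A toy simulation may make this caveat concrete. Everything below is a placeholder model of my own (the number ranges, Omega's number-picking rule, and the two decision rules are invented for illustration, not taken from the post); the only point is that once the agent's decision reads the Lottery outcome, Omega's number, which tracks the predicted decision, is no longer statistically independent of the Lottery's number.

```python
import random

def lottery_number():
    # The Number Lottery draws its number at random, with no reference to Omega.
    return random.randint(1, 100)

def omega_number(predicted_one_boxer):
    # Placeholder rule: Omega picks from one set of numbers if it predicts
    # one-boxing and a different set if it predicts two-boxing.
    return random.choice([3, 7, 13]) if predicted_one_boxer else random.choice([4, 8, 14])

def simulate(decision_rule, trials=100_000):
    pairs = []
    for _ in range(trials):
        lottery = lottery_number()
        # Treat Omega as a perfect predictor: its number is a function of what
        # the decision rule would actually do, including any use of the Lottery.
        one_boxes = decision_rule(lottery)
        pairs.append((omega_number(one_boxes), lottery))
    return pairs

# A rule that ignores the Lottery: Omega's number stays independent of it.
always_one_box = lambda lottery: True

# A rule that conditions on the Lottery: Omega's number now co-varies with it.
one_box_iff_lottery_even = lambda lottery: lottery % 2 == 0

for name, rule in [("ignore lottery", always_one_box),
                   ("condition on lottery", one_box_iff_lottery_even)]:
    pairs = simulate(rule)
    odd = [omega for omega, lottery in pairs if lottery % 2 == 1]
    even = [omega for omega, lottery in pairs if lottery % 2 == 0]
    print(f"{name}: mean Omega number given odd lottery = {sum(odd) / len(odd):.2f}, "
          f"given even lottery = {sum(even) / len(even):.2f}")
```

Under the first rule the two conditional means come out essentially equal; under the second they separate, which is exactly the loss of independence described above.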

Comment by bsterrett on The Ultimate Newcomb's Problem · 2013-09-10T20:18:05.928Z · LW · GW

I think this line of reasoning relies on the Number Lottery's choice of number being conditional on Omega's evaluation of you as a one-boxer or two-boxer. The problem description (at the time of this writing) states that the Number Lottery's number is randomly chosen, so it seems like more of a distraction than something you should try to manipulate for a better payoff.

Edit: Distraction is definitely the wrong word. As ShardPhoenix indicated, you might be able to get a better payoff by making your one-box / two-box decision depend on the outcome of the Number Lottery.

Comment by bsterrett on Anybody want to join a Math Club? · 2013-04-05T14:46:48.013Z · LW · GW

I have a copy of Probability Theory, but I've never made a solid effort to go through it. I'd love to commit to a group reading. Definitely interested.

Comment by bsterrett on Programming the LW Study Hall · 2013-03-18T21:24:34.307Z · LW · GW

This project now has a small team, but we'd love to get some more collaborators! You wouldn't be taking this on single-handedly. Anyone who is interested should PM me.

I plan to use one of the current mockups, such as Tinychat, while development is underway. We are still evaluating different approaches, so we won't be able to use the product of our work to host the study hall in the very short term. We'll definitely make a public announcement when we have something that users could try.

Comment by bsterrett on Don't Build Fallout Shelters · 2013-03-13T20:50:44.250Z · LW · GW

One can surely argue the inability of hostile forces to build and deploy a nuke is significant: seems some relationship exists between the intellect needed to make these things and the intellect needed to refuse to make or deploy these.

Could you state the relationship more explicitly? Your implication is not clear to me.

Comment by bsterrett on How to offend a rationalist (who hasn't thought about it yet): a life lesson · 2013-02-14T19:08:10.720Z · LW · GW

I was recently reflecting on an argument I had with someone who expressed an idea that made me very frustrated, though I don't think I was as angry as you described yourself after your own argument. I judged them to be making a very basic mistake of rationality, and I was trying to help them avoid it. Their response implied that they didn't think they had executed the flawed mental process I was accusing them of, and that even if they had, it would not necessarily be a mistake. In the moment, I took this response to be a complete rejection of rationality (or something like that), and I became slightly angry and very frustrated.

I realized afterwards that a big part of what upset me was that I was trying to do something that I felt would be helpful to this person and everyone around them and possibly the world at large, yet they were rejecting it for no reason that I could identify in the moment. (I know that my pushiness about rationality can make the world at large worse instead of better, but this was not on my mind in the moment.) I was thinking of myself as being charitable and nice, and I was thinking of them as inexplicably not receptive. On top of this, I had failed to liaise even decently on behalf of rationalists, and I had possibly turned this person off to the study of rationality. I think these things upset me more than I ever could have realized while the argument was still going on. Perhaps you felt some of this as well? I don't expect these considerations to account for all of the emotions you felt, but I would be surprised if they were totally uninvolved.

Comment by bsterrett on The Evil AI Overlord List · 2012-11-20T19:39:02.867Z · LW · GW

59: I will never build a sentient computer smarter than I am.

Comment by bsterrett on A summary of the Hanson-Yudkowsky FOOM debate · 2012-11-16T18:47:09.301Z · LW · GW

For anyone following the sequence rerun going on right now, this summary is highly recommended. It is much more manageable than the blog posts, and doesn't leave out anything important (that I noticed).

Comment by bsterrett on [SEQ RERUN] Brain Emulation and Hard Takeoff · 2012-11-15T18:45:39.290Z · LW · GW

Has there been some previous discussion of reliance on custom hardware? My cursory search didn't turn anything up.

Comment by bsterrett on [SEQ RERUN] AI Go Foom · 2012-11-10T21:12:07.576Z · LW · GW

To your first objection, I agree that "the gradient may not be the same in the two," when you are talking about chimp-to-human growth and human-to-superintelligence growth. But Eliezer's stated reason mostly applies to the areas near human intelligence, as I said. There is no consensus on how far the "steep" area extends, so I think your doubt is justified.

Your second objection also sounds reasonable to me, but I don't know enough about evolution to confidently endorse or dispute it. To me, this sounds similar to a point that Tim Tyler tries to make repeatedly in this sequence, but I haven't investigated his views thoroughly. I believe his stance is as follows: since a human selects a mate using their brain, and intelligence is so necessary for human survival, and sexual organisms want to pick fit mates, there has been a nontrivial feedback loop caused by humans using their intelligence to be good at selecting intelligent mates. Do you endorse this? (I am not sure, myself.)

Comment by bsterrett on [SEQ RERUN] AI Go Foom · 2012-11-09T19:30:09.082Z · LW · GW

Eliezer's stated reason, as I understand it, is that evolution's work to increase the performance of the human brain did not suffer diminishing returns on the path from roughly chimpanzee brains to current human brains. Actually, there was probably a slightly greater-than-linear increase in human intelligence per unit of evolutionary time.

If we also assume that evolution did not have an increasing optimization pressure which could account for the nonlinear trend (which might be an assumption worth exploring; I believe Tim Tyler would deny this), then this suggests that the slope of 'intelligence per optimization pressure applied' is steep around the level of human intelligence, from the perspective of a process improving an intelligent entity. I am not sure this translates perfectly into your formulation using x's and y's, but I think it is a sufficiently illustrative answer to your question. It is not a very concrete reason to believe Eliezer's conclusion, but it is suggestive.
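One way to put that in symbols (notation of my own, not anything from the original posts): let I(t) be intelligence over evolutionary time and O(t) the cumulative optimization pressure applied. If that pressure was applied at a roughly constant rate, then the observation that intelligence grew at least linearly in time translates directly into a non-decreasing "intelligence per optimization pressure" slope near the human level:

```latex
% Notation of my own: I(t) = intelligence, O(t) = cumulative optimization
% pressure applied by evolution, with O(t) \approx c\,t (constant rate).
\[
  \frac{dI}{dO} \;=\; \frac{dI/dt}{dO/dt} \;\approx\; \frac{1}{c}\,\frac{dI}{dt}
\]
% Since dI/dt was non-decreasing (arguably slightly super-linear growth in I),
% dI/dO is non-decreasing too, i.e. no diminishing returns in this region.
```

That is all "no diminishing returns" means here; it says nothing about how far past the human level the steep region extends.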

Comment by bsterrett on 2012 Less Wrong Census/Survey · 2012-11-06T21:07:13.373Z · LW · GW

I found that leaving a question and coming back to it was much more helpful than trying to focus on it. There were several questions that I made no progress on for a few minutes, but I could immediately solve them upon returning to them.

Comment by bsterrett on 2012 Less Wrong Census/Survey · 2012-11-06T21:01:40.944Z · LW · GW

To clarify: this involves selecting the hyperlink text with your mouse, but not releasing your mouse button, and then copying the text while it is still selected.

"Keeping it selected" is the default behavior of the browser which does not seem to be working.

Comment by bsterrett on 2012 Less Wrong Census/Survey · 2012-11-06T14:56:17.952Z · LW · GW

I took the survey! Karma, please!

Never done an IQ test before. I thought it was fun! Now I want to take one of the legitimate ones.

Comment by bsterrett on Logical Pinpointing · 2012-11-03T20:07:31.506Z · LW · GW

I can't imagine a universe without mathematics, yet I think mathematics is meaningful. Doesn't this mean the test is not sufficient to determine the meaningfulness of a property?

Is there some established thinking on alternate universes without mathematics? My failure to imagine such universes is hardly conclusive.

Comment by bsterrett on Logical Pinpointing · 2012-11-03T17:40:11.841Z · LW · GW

Like army1987 notes, it is an instruction and not a statement. Considering that, I think "if X is just, then do X" is a good imperative to live by, assuming some good definition of justice. I don't think I would describe it as "wrong" or "correct" at this point.

Comment by bsterrett on Logical Pinpointing · 2012-11-01T21:32:18.854Z · LW · GW

I am not entirely sure how you arrived at the conclusion that justice is a meaningful concept. I am also unclear on how you know the statement "If X is just, then do X" is correct. Could you elaborate further?

In general, I don't think it is a sufficient test for the meaningfulness of a property to say "I can imagine a universe which has/lacks this property, unlike our universe, therefore it is meaningful."

Comment by bsterrett on Checking Kurzweil's track record · 2012-10-31T17:45:54.449Z · LW · GW

I'll do 10.

What is the error-checking process? Will we fix any mistakes in our verdicts via an LW discussion after they have been gathered?

Comment by bsterrett on Permitted Possibilities, & Locality · 2012-10-26T14:55:37.299Z · LW · GW

I recently read the wiki article on criticality accidents, and it seems relevant here. "A criticality accident, sometimes referred to as an excursion or a power excursion, is the unintentional assembly of a critical mass of a given fissile material, such as enriched uranium or plutonium, in an unprotected environment."

Assuming Eliezer's analysis is correct, we cannot afford even one of these in the domain of self-improving AI. Thankfully, it's harder to accidentally create a self-improving AI than it is to drop a brick in the wrong place at the wrong time.

Comment by bsterrett on Looking for alteration suggestions for the official Sequences ebook · 2012-10-18T19:01:21.851Z · LW · GW

I think this title sounds better if you are already familiar with the sequences. The importance and difficulty of changing your mind are not likely to be appreciated by people outside this community.

Comment by bsterrett on The Fabric of Real Things · 2012-10-15T17:53:16.809Z · LW · GW

What is the difference between constraining experience and constraining expectations? Is there one?