Comments

Comment by Craig_Morgan on Positioning oneself to make a difference · 2010-08-20T07:03:43.917Z · LW · GW

Hello from Perth! I'm 27, have a computer science background, and have been following Eliezer/Overcoming Bias/Less Wrong since finding LOGI circa 2002. I've also been thinking about how I can "position myself to make a difference", and have finally overcome my akrasia; here's what I'm doing.

I'll be attending the 2010 Machine Learning Summer School and Algorithmic Learning Theory Conference for a few reasons:

  • To meet and get to know some people in the AI community. Marcus Hutter will be presenting his talk on Universal Artificial Intelligence at MLSS2010.
  • To immerse myself in the current topics of the AI research community.
  • To figure out whether I'm capable of contributing to that research.
  • To figure out whether contributing to that research will actually help in the building of a FAI.
Comment by Craig_Morgan on Open Thread June 2010, Part 3 · 2010-06-15T03:35:31.050Z · LW · GW

I have not been convinced of, but am open to, the idea that a paperclip maximizer is the overwhelmingly likely outcome if we create a superhuman AI. At present, my thinking is that if some care is taken in the creation of a superhuman AI, more likely than a paperclip maximizer is an AI which partially shares human values; that is, the dichotomy "paperclip maximizer vs. Friendly AI" seems like a false dichotomy. I imagine that the sort of AI that people would actually build would be somewhere in the middle. Any recommended reading on this point is appreciated.

I believed similarly until I read Steve Omohundro's The Basic AI Drives. It convinced me that a paperclip maximizer is the overwhelmingly likely outcome of creating an AGI.

Comment by Craig_Morgan on Positive Bias Test (C++ program) · 2009-05-21T05:05:37.014Z · LW · GW

It is absolutely NOT a trick question.

There are infinitely many hypotheses for what an 'Awesome Triplet' could be. Here are some example hypotheses that could be true given our initial evidence that '2 4 6' is an awesome triplet:

  1. Any three integers
  2. Any three integers in ascending order
  3. Three successive multiples of the same number
  4. The sequence '2 4 6'
  5. Three integers not contained in the set '512 231123 691 9834 91238 1'

We cannot falsify every possible hypothesis, so we need a strategy for which hypotheses to test, starting from the most likely. Not all hypotheses are created equal.

I want to falsify as much of the hypothesis-space as possible (where simple hypotheses take up more space), so I design a test that should do so. My first test was three integers in descending order, because it can falsify #1, the simplest hypothesis. I find from this test that #1 is false. My second test distinguishes between #2 and #3: '1 2 5', three integers in ascending order but not successive multiples of the same number. I find from this test that #2 is still plausible, but #3 is falsified.

You can continue falsifying smaller and smaller areas of the hypothesis-space with additional tests, up until you're happy with your confidence level or you're bored of testing.
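
To make the strategy concrete, here is a minimal sketch in C++ (the language of the program this post discusses; this is not that program). The hidden rule, the concrete test triplets, and all names here are my own assumptions for illustration:

```cpp
// A minimal sketch of the falsification strategy described above, not the
// original program. The hidden rule and the concrete test triplets are
// assumptions chosen for illustration.
#include <array>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Hypothesis {
    std::string name;
    std::function<bool(int, int, int)> predicts_awesome;
};

// Stand-in for the experimenter's hidden rule (assumed here: ascending order).
bool hidden_rule(int a, int b, int c) { return a < b && b < c; }

int main() {
    std::vector<Hypothesis> hypotheses = {
        {"#1 any three integers",        [](int, int, int) { return true; }},
        {"#2 ascending order",           [](int a, int b, int c) { return a < b && b < c; }},
        {"#3 successive multiples of n", [](int a, int b, int c) { return b == 2 * a && c == 3 * a; }},
    };

    // First test: a descending triplet (targets #1, the simplest hypothesis).
    // Second test: ascending but not successive multiples (splits #2 from #3).
    std::vector<std::array<int, 3>> tests = {{6, 4, 2}, {1, 2, 5}};

    for (const auto& t : tests) {
        bool actual = hidden_rule(t[0], t[1], t[2]);
        std::cout << t[0] << ' ' << t[1] << ' ' << t[2]
                  << (actual ? " IS awesome\n" : " is NOT awesome\n");
        for (const auto& h : hypotheses) {
            // A hypothesis is falsified when its prediction disagrees
            // with the experimenter's answer.
            if (h.predicts_awesome(t[0], t[1], t[2]) != actual)
                std::cout << "  falsifies " << h.name << '\n';
        }
    }
}
```

Run against this assumed hidden rule, the first test falsifies #1 and the second falsifies #3, leaving #2 (among infinitely many other hypotheses) still standing.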

For much better coverage of this entire area, see Eliezer's posts on this topic.

For a good overview of additional related posts, see the list.

Edit: Learning Markdown, fixing style.

Comment by Craig_Morgan on Survey Results · 2009-05-14T02:34:50.527Z · LW · GW

At the time of this comment, thomblake's comment above is at -3 points, and there are no replies arguing against his opinion or explaining why he is wrong. We should not downvote a comment simply because we disagree with it. Thomblake expressed an opinion that differs (I presume) from the community majority; a better response to such an opinion is to present arguments that would correct his belief. Voting based on agreement or disagreement will lead people not to express viewpoints they believe differ from the community's.

Comment by Craig_Morgan on Survey Results · 2009-05-13T08:48:43.162Z · LW · GW

I agree with your point, but just because someone can't enumerate 299 possibilities does not mean they should not reserve probability space for unknown unknowns. Put another way: in calculating these odds, you must leave room for race-ending catastrophes that you didn't even imagine. I believe this point is important, that we succumb to multiple biases in this area, and that these biases have affected the decision-making of many rationalists. I am preparing a Less Wrong post on this and related topics.
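
As a toy illustration of the arithmetic (my own sketch; all the numbers are made-up placeholders, not estimates from the survey):

```cpp
// Toy sketch of reserving probability mass for unknown unknowns.
// The scenario probabilities and the reserve are made-up placeholders,
// and the scenarios are treated as disjoint for simplicity.
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Probabilities of the race-ending catastrophes you managed to imagine.
    std::vector<double> enumerated = {0.02, 0.01, 0.005};

    // Mass held back for catastrophes you didn't imagine.
    double unknown_unknowns = 0.01;

    double p_catastrophe =
        std::accumulate(enumerated.begin(), enumerated.end(), unknown_unknowns);

    // Summing only the enumerated scenarios would give 0.035; reserving
    // space for unknown unknowns raises the estimate to 0.045.
    std::cout << "P(race-ending catastrophe) >= " << p_catastrophe << '\n';
}
```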