Comments

Comment by simpleton on Brief question about Conway's Game of Life and AI · 2011-06-02T21:34:00.609Z

Conway’s Game of Life is Turing-complete. Therefore, it is possible to create an AI in it. If you created a 3^^3 by 3^^3 Life board, setting the initial state at random, presumably somewhere an AI would be created.

I don't think Turing-completeness implies that.

Consider the similar statement: "If you loaded a Turing machine with a sufficiently long random tape, and let it run for enough clock ticks, an AI would be created." This is clearly false: Although it's possible to write an AI for such a machine, the right selection pressures don't exist to produce one this way; the machine is overwhelmingly likely to just end up in an uninteresting infinite loop.

Likewise, the physics of Life are most likely too impoverished to support the evolution of anything more than very simple self-replicating patterns.
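
One way to make this concrete is to run random "soups" and watch them settle. Below is a minimal sketch in Python (my illustration, not a proof: the board size, density, and step budget are arbitrary choices, and a tiny torus says nothing definitive about a 3^^3 board). In practice, such soups almost always decay into ash -- still lifes and short-period oscillators -- within a few hundred generations:

```python
# Run random Game of Life "soups" on a small toroidal grid and report
# how quickly they fall into a cycle (ash: still lifes and oscillators).
# Board size, density, and step budget are arbitrary choices.
import random

def step(grid, n):
    """One B3/S23 generation on an n x n toroidal grid."""
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            neighbors = sum(
                grid[(i + di) % n][(j + dj) % n]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
            )
            new[i][j] = 1 if neighbors == 3 or (grid[i][j] and neighbors == 2) else 0
    return new

def run_soup(n=32, density=0.5, max_steps=1000):
    grid = [[int(random.random() < density) for _ in range(n)] for _ in range(n)]
    seen = {}  # grid state -> first generation it appeared
    for t in range(max_steps):
        key = tuple(map(tuple, grid))
        if key in seen:
            return t, t - seen[key]  # settled: generation and cycle period
        seen[key] = t
        grid = step(grid, n)
    return max_steps, None  # still churning after the step budget

if __name__ == "__main__":
    for trial in range(3):
        settled_at, period = run_soup()
        print(f"trial {trial}: cycle detected at generation {settled_at}, period {period}")
```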

Comment by simpleton on Science Fiction Recommendations · 2011-04-05T21:02:07.843Z

Stephenson remains one of my favorites, even though I failed at several attempts to enjoy his Baroque Cycle series. Anathem is as good as his pre-Baroque-Cycle work.

Comment by simpleton on 12-year old challenges the Big Bang · 2011-03-29T07:34:28.665Z

Poor kid. He's a smart 12-year-old who has some silly ideas, as smart 12-year-olds often do, and now he'll never be able to live them down because some reporter wrote a fluff piece about him. Hopefully he'll grow up to be embarrassed by this, instead of turning into a crank.

His theories as quoted in the article don't seem to be very coherent -- I can't even tell if he's using the term "big bang" to mean the origin of the universe or a nova -- so I don't think there's much of a claim to be evaluated here.

Of course, it's very possible that the reporter butchered the quote. It's a human interest article and it's painfully obvious that the reporter parsed every word out of the kid's mouth as science-as-attire, with no attempt to understand the content.

Comment by simpleton on Research methods · 2011-02-22T21:06:58.536Z

For those of you who watch Breaking Bad, the disaster at the end of Season 3 probably wouldn't have happened if the US adopted a similar system.

When I saw that episode, my first thought was that it would be extraordinarily unlikely in the US, no matter how badly ATC messed up. TCAS (the Traffic Collision Avoidance System) has turned mid-air collisions between airliners into an almost nonexistent type of accident.

Comment by simpleton on Sunk Cost Fallacy · 2011-01-13T03:00:31.758Z

This does happen a lot among retail investors, and people don't think about the reversal test nearly often enough.

There's a closely related bias which could be called the Sunk Gain Fallacy: I know people who believe that if you buy a stock and it doubles in value, you should immediately sell half of it (regardless of your estimate of its future prospects), because "that way you're gambling with someone else's money". These same people use mottos like "Nobody ever lost money taking a profit!" to justify grossly expected-value-destroying actions like early exercise of options.
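
As a toy illustration of why (my own made-up numbers, not from the original discussion): once the position has doubled, the only thing that matters going forward is your estimate of its future return, and the "sell half" rule forgoes expected gains whenever that estimate is positive:

```python
# Toy expected-value comparison; all numbers are made up for illustration.
# A $1000 position has doubled to $2000, and you estimate a 10% expected
# return over the next period.
expected_return = 0.10   # assumed forward-looking estimate
position = 2000.0        # current value after doubling

hold_all  = position * (1 + expected_return)
sell_half = position / 2 + (position / 2) * (1 + expected_return)

print(f"hold everything:         ${hold_all:,.2f}")   # $2,200.00
print(f"sell half 'house money': ${sell_half:,.2f}")  # $2,100.00
```

(Risk aversion can justify trimming an oversized position, but that's an argument about your portfolio as it stands, not about where the money came from.)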

However, a bias toward holding what you already own may be a useful form of hysteresis for a couple of reasons:

  • There are expenses, fees, and tax consequences associated with trading. Churning your investments is almost always a bad thing, especially since the market is mostly efficient and whatever you're holding will tend to have the same expected value as anything else you could buy.

  • Human decisionmaking is noisy. If you wake up every morning and remake your investment portfolio de novo, the noise will dominate. If you discount your first-order conclusions and only change your strategy at infrequent intervals, after repeated consideration, or only when you have an exceptionally good reason, your strategy will tend towards monotonic improvement.

Comment by simpleton on Reliably wrong · 2010-12-09T21:24:32.930Z

It's common in certain types of polemic. People hold (or claim to hold) beliefs to signal group affiliation, and the more outlandishly improbable the beliefs become, the more effective they are as a signal.

It becomes a competition: Whoever professes beliefs which most strain credibility is the most loyal.

Comment by simpleton on Short versions of the basic premise about FAI · 2010-11-01T03:13:27.088Z

Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.

Comment by simpleton on Short versions of the basic premise about FAI · 2010-10-31T23:53:24.251Z

The Hidden Complexity of Wishes

Comment by simpleton on 23andme genome analysis - $99 today only · 2010-04-24T00:26:31.514Z

Argh. I'd actually been thinking about getting a 23andme test for the last week or so but was put off by the price. I saw this about 20 minutes too late (it apparently ended at midnight UTC).

Comment by simpleton on Disclosure vs. Bans: Reply to Robin Hanson · 2010-01-06T03:59:15.263Z

In practice, you can rarely use GPLed software libraries for development unless you work for a nonprofit.

That's a gross overgeneralization.

Comment by simpleton on If reason told you to jump off a cliff, would you do it? · 2009-12-21T07:04:32.829Z

Yes.

The things Shalmanese is labeling "reason" and "evidence" seem to closely correspond to what have previously been called the inside view and outside view, respectively (both of which are modes of reasoning, under the more common definition).

Comment by simpleton on The Contrarian Status Catch-22 · 2009-12-20T23:28:37.757Z

Quite the opposite, under the technical definition of simplicity in the context of Occam's Razor.

Comment by simpleton on Why Many-Worlds Is Not The Rationally Favored Interpretation · 2009-09-29T16:17:29.156Z

MWI completely fails if any such non-linearities are present, while other theories can handle them. [...] It can collapse with one experiment, and I'm not betting against such experiment happening in my lifetime at odds higher than 10:1.

So you're saying MWI tells us what to anticipate more specifically (and therefore makes itself more falsifiable) than the alternatives, and that's a point against it?

Comment by simpleton on Misleading the witness · 2009-08-10T06:02:23.082Z

And the best workaround you can come up with is to walk away from the money entirely? I don't buy it.

If you go through life acting as if your akrasia is so immutable that you have to walk away from huge wins like this, you're selling yourself short.

Even if you're right about yourself, you can just keep $1000 [edit: make that $3334, so as to have a higher expected value than a sure $500] and give the rest away before you have time to change your mind. Or put the whole million in an irrevocable trust. These aren't even the good ideas; they're just the trivial ones which are better than what you're suggesting.

Comment by simpleton on Misleading the witness · 2009-08-10T03:34:10.834Z

Being aware of that tendency should make it possible to avoid ruination without forgoing the money entirely (e.g. by investing it wisely and not spending down the principal on any radical lifestyle changes, or even by giving all of it away to some worthy cause).

Comment by simpleton on The Strangest Thing An AI Could Tell You · 2009-07-15T06:07:19.749Z

Well, I wouldn't rule out any of:

1) I and the AI are the only real optimization processes in the universe.

2) I-and-the-AI is the only real optimization process in the universe (but the AI half of this duo consistently makes better predictions than "I" do).

3) The concept of personal identity is unsalvageably confused.

Comment by simpleton on The Strangest Thing An AI Could Tell You · 2009-07-15T05:30:06.514Z

If we have this incapability, what explains the abundant fiction in which nonhuman animals (both terrestrial and non) are capable of speech, and childhood anthropomorphization of animals?

That's not anthropomorphization.

Can you teach me to talk to the stray cat in my neighborhood?

Sorry, you're too old. Those childhood conversations you had with cats were real. You just started dismissing them as make-believe once your ability to doublethink was fully mature.

All of the really interesting stuff, from before you could doublethink at all, has been blocked out entirely by infantile amnesia.

Comment by simpleton on The Strangest Thing An AI Could Tell You · 2009-07-15T05:11:41.387Z

I would believe that human cognition is much, much simpler than it feels from the inside -- that there are no deep algorithms, and it's all just cache lookups plus a handful of feedback loops which even a mere human programmer would call trivial.

I would believe that there's no way to define "sentience" (without resorting to something ridiculously post hoc) which includes humans but excludes most other mammals.

I would believe in solipsism.

I can hardly think of any political, economic, or moral assertion I'd regard as implausible, except that one of the world's extant religions is true (since that would have about as much internal consistency as "2 + 2 = 3").

Comment by simpleton on The Wire versus Evolutionary Psychology · 2009-05-26T15:43:42.358Z

The actual quote didn't contain the word "beat" at all. It was "Count be wrong, they fuck you up."

Comment by simpleton on This Failing Earth · 2009-05-26T00:48:24.459Z

The fact that we find ourselves in a world which has not ended is not evidence: survivors necessarily observe an unbroken record of survival, no matter how unlikely that survival was.

Comment by simpleton on Off Topic Thread: May 2009 · 2009-05-25T06:12:29.962Z

lesswrong.com's web server is in the US but both of its nameservers are in Australia, leading to very slow lookups for me -- often slow enough that my resolver times out (and caches the failure).

I am my own DNS admin so I can work around this by forcing a cache flush when I need to, but I imagine this would be a more serious problem for people who rely on their ISPs' DNS servers.
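
A minimal way to check this from the client side, using only the Python standard library (the 5-second "slow" threshold is an arbitrary assumption; note that after the first successful call the answer is usually cached, so only a cold lookup shows the full round trip to the nameservers):

```python
# Time DNS lookups for lesswrong.com to see whether name resolution,
# rather than the HTTP connection, is the slow step. Standard library
# only; the 5-second "slow" threshold is an arbitrary choice.
import socket
import time

def time_lookup(hostname):
    start = time.monotonic()
    try:
        result = socket.gethostbyname(hostname)
    except socket.gaierror as e:
        result = f"failed: {e}"
    return time.monotonic() - start, result

if __name__ == "__main__":
    for _ in range(3):
        elapsed, result = time_lookup("lesswrong.com")
        flag = "  <-- slow" if elapsed > 5.0 else ""
        print(f"{elapsed:6.2f}s  {result}{flag}")
        time.sleep(1)
```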

Comment by simpleton on A Request for Open Problems · 2009-05-10T19:13:24.611Z

Quite a bit is known about the neurology behind face recognition. No one understands the algorithm well enough to build a fusiform gyrus from scratch, but that doesn't make the existence of an algorithm mysterious.

Comment by simpleton on Open Thread: May 2009 · 2009-05-02T04:39:52.792Z

Each post under http://lesswrong.com/user/yourname/hidden/ should have an Unhide link.

Comment by simpleton on This Didn't Have To Happen · 2009-04-24T23:01:32.687Z

Thanks, it looks like I misremembered -- if they're now doing perfusion after neuroseparation then it's much more likely to be compatible with organ donation.

I've sent Alcor a question about this.

Comment by simpleton on This Didn't Have To Happen · 2009-04-24T04:07:58.094Z

This is the only reason I haven't signed up.

What I want to do is sign up for neuropreservation and donate any organs and tissues from the neck down, but as far as I can tell that's not even remotely feasible. Alcor's procedure involves cooling the whole body to 0°C and injecting the cryoprotectant before removing the head (and I can understand why perfusion would be a lot easier while the head is still attached). Also, I think it's doubtful that the cryonics team and the transplant team would coordinate with each other effectively, even if there were no technical obstacles.

Comment by simpleton on Fix it and tell us what you did · 2009-04-23T17:35:00.884Z

Are we developing a new art of akrasia-fighting, or is this just repackaged garden-variety self-help?

Edit: I don't mean to disparage anyone's efforts to improve themselves. (My only objection to the field of "self-help" is that it's dominated by charlatans.) But there is an existing body of science here, and I fear that if we go down this road the Art of Rationality will turn into nothing more than amateur behavioral psychology.

Comment by simpleton on Extreme Rationality: It's Not That Great · 2009-04-09T04:22:51.822Z

If in 1660 you'd asked the first members of the Royal Society to list the ways in which natural philosophy had tangibly improved their lives, you probably wouldn't have gotten a very impressive list.

Looking over history, you would not have found any tendency for successful people to have made a formal study of natural philosophy.

Comment by simpleton on Rationality, Cryonics and Pascal's Wager · 2009-04-08T22:01:44.549Z

Alcor says they have a >50% incidence of poor cases.

Comment by simpleton on 3 Levels of Rationality Verification · 2009-03-16T23:37:22.155Z

I strongly second the idea of using real science as a test. Jeffreyssai wouldn't be satisfied with feeding his students -- even the beginners -- artificial puzzles all day. Artificial puzzles are shallow.

It wouldn't even have to be historical science. Science is still young enough that there's a lot of low-hanging fruit. I don't think we have a shortage of scientific questions which are genuinely unanswered, but which can be recognized as answerable in a moderate amount of time by a beginner or intermediate student.

Comment by simpleton on It's the Same Five Dollars! · 2009-03-08T19:21:47.660Z

There's a heuristic at work here which isn't completely unreasonable.

I buy $15 items on a daily basis. If I form a habit of ignoring a $5 savings on such purchases, I'll be wasting a significant fraction of my income. I buy $125 items rarely enough that I can give myself permission to splurge and avoid the drive across town.

The percentage does matter -- it's a proxy for the rate at which the savings add up.
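
Made concrete (the purchase frequencies here are my own made-up assumptions, not from the post):

```python
# Toy arithmetic: the same $5 matters very differently depending on how
# often the purchase recurs. Frequencies are made-up assumptions.
daily_purchases_per_year = 365   # $15 items, bought daily
rare_purchases_per_year = 2      # $125 items, bought a couple of times a year

print(f"$5 saved on daily purchases: ${5 * daily_purchases_per_year}/year")  # $1825/year
print(f"$5 saved on rare purchases:  ${5 * rare_purchases_per_year}/year")   # $10/year
```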

It's also a proxy for the importance of the savings relative to other considerations, which are often proportional to the value of what you're buying. If you were about to sign the papers on a $20000 car purchase, would you walk away at the last minute if you found out that an identical car was available from another dealer for $19995? Would you try to explicitly weigh the $5 against intangibles such as your level of trust in the first dealer compared to the second, or would you be right to regard the $5 as a distraction and ignore it?