Conway’s Game of Life is Turing-complete. Therefore, it is possible to create an AI in it. If you created a 3^^3 by 3^^3 Life board, setting the initial state at random, presumably somewhere an AI would be created.
I don't think Turing-completeness implies that.
Consider the similar statement: "If you loaded a Turing machine with a sufficiently long random tape, and let it run for enough clock ticks, an AI would be created." This is clearly false: Although it's possible to write an AI for such a machine, the right selection pressures don't exist to produce one this way; the machine is overwhelmingly likely to just end up in an uninteresting infinite loop.
Likewise, the physics of Life are most likely too impoverished to support the evolution of anything more than very simple self-replicating patterns.
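As a toy illustration rather than a proof (board size, soup density, and generation cap are arbitrary choices of mine), here's a minimal Life simulation in Python: random soups on a small torus almost always freeze into still lifes and short-period oscillators within a few hundred generations.

```python
import random

def step(grid, n):
    # Apply one Game of Life generation on an n-by-n toroidal grid,
    # where `grid` is the set of live-cell coordinates.
    new = set()
    for x in range(n):
        for y in range(n):
            live = sum(((x + dx) % n, (y + dy) % n) in grid
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                       if (dx, dy) != (0, 0))
            if live == 3 or (live == 2 and (x, y) in grid):
                new.add((x, y))
    return new

n = 24
grid = {(x, y) for x in range(n) for y in range(n) if random.random() < 0.35}
seen = {}                      # board state -> generation first seen
for gen in range(2000):
    key = frozenset(grid)
    if key in seen:
        print(f"soup went periodic (period {gen - seen[key]}) by generation {gen}")
        break
    seen[key] = gen
    grid = step(grid, n)
else:
    print("still aperiodic after 2000 generations")
```

Nothing about this rules out astronomically rare self-replicators, of course; it just shows where the probability mass sits.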
Stephenson remains one of my favorites, even though I failed at several attempts to enjoy his Baroque Cycle series. Anathem is as good as his pre-Baroque-Cycle work.
Poor kid. He's a smart 12 year old who has some silly ideas, as smart 12 year olds often do, and now he'll never be able to live them down because some reporter wrote a fluff piece about him. Hopefully he'll grow up to be embarrassed by this, instead of turning into a crank.
His theories as quoted in the article don't seem to be very coherent -- I can't even tell if he's using the term "big bang" to mean the origin of the universe or a nova -- so I don't think there's much of a claim to be evaluated here.
Of course, it's very possible that the reporter butchered the quote. It's a human interest article and it's painfully obvious that the reporter parsed every word out of the kid's mouth as science-as-attire, with no attempt to understand the content.
For those of you who watch Breaking Bad, the disaster at the end of Season 3 probably wouldn't have happened if the US had adopted a similar system.
When I saw that episode, my first thought was that it would be extraordinarily unlikely in the US, no matter how badly ATC messed up. TCAS has turned mid-air collisions between airliners into an almost nonexistent type of accident.
This does happen a lot among retail investors, and people don't think about the reversal test nearly often enough.
There's a closely related bias which could be called the Sunk Gain Fallacy: I know people who believe that if you buy a stock and it doubles in value, you should immediately sell half of it (regardless of your estimate of its future prospects), because "that way you're gambling with someone else's money". These same people use mottos like "Nobody ever lost money taking a profit!" to justify grossly expected-value-destroying actions like early exercise of options.
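A quick sketch of why the rule destroys expected value (the drift and volatility numbers are made up, and taxes and fees are ignored): on the same random price paths, "sell half after a double" just truncates the position's upside whenever the expected return is positive.

```python
import random

def simulate(trials=5000, steps=1260, mu=0.0004, sigma=0.02):
    # Geometric random walk with slightly positive drift (invented numbers).
    # Both strategies are tracked on each path, so the comparison uses
    # common random numbers.
    hold_total = sell_total = 0.0
    for _ in range(trials):
        price, shares, cash = 100.0, 1.0, 0.0
        for _ in range(steps):
            price *= 1.0 + random.gauss(mu, sigma)
            if shares == 1.0 and price >= 200.0:
                cash = price / 2.0   # "now we're gambling with their money"
                shares = 0.5
        hold_total += price                  # buy and hold one share
        sell_total += shares * price + cash  # sell half after a double
    return hold_total / trials, sell_total / trials

hold, sell = simulate()
print(f"buy and hold:        {hold:.2f}")
print(f"sell half on double: {sell:.2f}")
```

With zero expected return the two strategies tie before costs; the "someone else's money" framing never adds anything.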
However, a bias toward holding what you already own may be a useful form of hysteresis for a couple of reasons:
1) There are expenses, fees, and tax consequences associated with trading. Churning your investments is almost always a bad thing, especially since the market is mostly efficient and whatever you're holding will tend to have the same expected value as anything else you could buy.
2) Human decision-making is noisy. If you wake up every morning and remake your investment portfolio de novo, the noise will dominate. If you discount your first-order conclusions and change your strategy only at infrequent intervals, after repeated consideration, or when you have an exceptionally good reason, your strategy will tend towards monotonic improvement (see the sketch below).
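A toy model of the second point (all numbers invented): suppose each "look" at an option reports its true quality plus independent noise, and you pick whichever option currently looks best. Averaging more looks before acting -- i.e., repeated consideration -- reliably picks better options than a single snap judgment.

```python
import random

def mean_pick_quality(looks, n_options=20, noise=2.0, trials=20000):
    # Each option has a fixed true quality ~ N(0, 1).  An estimate is the
    # true quality plus noise whose s.d. shrinks as sqrt(looks) when the
    # looks are averaged.  Return the mean TRUE quality of the chosen option.
    total = 0.0
    for _ in range(trials):
        true_q = [random.gauss(0, 1) for _ in range(n_options)]
        est = [q + random.gauss(0, noise / looks ** 0.5) for q in true_q]
        total += true_q[est.index(max(est))]
    return total / trials

for looks in (1, 5, 25):
    print(f"{looks:2d} look(s): mean true quality of pick = "
          f"{mean_pick_quality(looks):.2f}")
```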
It's common in certain types of polemic. People hold (or claim to hold) beliefs to signal group affiliation, and the more outlandishly improbable the beliefs become, the more effective they are as a signal.
It becomes a competition: Whoever professes beliefs which most strain credibility is the most loyal.
Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.
Argh. I'd actually been thinking about getting a 23andme test for the last week or so but was put off by the price. I saw this about 20 minutes too late (it apparently ended at midnight UTC).
In practice, you can rarely use GPLed software libraries for development unless you work for a nonprofit.
That's a gross overgeneralization.
Yes.
The things Shalmanese is labeling "reason" and "evidence" seem to closely correspond to what have previously been called the inside view and outside view, respectively (both of which are modes of reasoning, under the more common definition).
Quite the opposite, under the technical definition of simplicity in the context of Occam's Razor.
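For anyone who hasn't seen it, the technical definition being invoked here is (presumably) the algorithmic one, under which a hypothesis's prior weight depends on the length of the shortest program that generates its predictions, not on how much stuff the hypothesis says exists:

```latex
P(H) \propto 2^{-K(H)}
```

where K(H) is the Kolmogorov complexity of H. By that measure, dropping the collapse postulate makes the program shorter, no matter how many worlds the remaining dynamics imply.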
MWI completely fails if any such non-linearities are present, while other theories can handle them. [...] It can collapse with one experiment, and I'm not betting against such experiment happening in my lifetime at odds higher than 10:1.
So you're saying MWI tells us what to anticipate more specifically (and therefore makes itself more falsifiable) than the alternatives, and that's a point against it?
And the best workaround you can come up with is to walk away from the money entirely? I don't buy it.
If you go through life acting as if your akrasia is so immutable that you have to walk away from huge wins like this, you're selling yourself short.
Even if you're right about yourself, you can just keep $1000 [edit: make that $3334, so as to have a higher expected value than a sure $500] and give the rest away before you have time to change your mind. Or put the whole million in an irrevocable trust. These aren't even the good ideas; they're just the trivial ones which are better than what you're suggesting.
Being aware of that tendency should make it possible to avoid ruination without forgoing the money entirely (e.g. by investing it wisely and not spending down the principal on any radical lifestyle changes, or even by giving all of it away to some worthy cause).
Well, I wouldn't rule out any of:
1) I and the AI are the only real optimization processes in the universe.
2) I-and-the-AI is the only real optimization process in the universe (but the AI half of this duo consistently makes better predictions than "I" do).
3) The concept of personal identity is unsalvageably confused.
If we have this incapability, what explains the abundant fiction in which nonhuman animals (both terrestrial and non) are capable of speech, and childhood anthropomorphization of animals?
That's not anthropomorphization.
Can you teach me to talk to the stray cat in my neighborhood?
Sorry, you're too old. Those childhood conversations you had with cats were real. You just started dismissing them as make-believe once your ability to doublethink was fully mature.
All of the really interesting stuff, from before you could doublethink at all, has been blocked out entirely by infantile amnesia.
I would believe that human cognition is much, much simpler than it feels from the inside -- that there are no deep algorithms, and it's all just cache lookups plus a handful of feedback loops which even a mere human programmer would call trivial.
I would believe that there's no way to define "sentience" (without resorting to something ridiculously post hoc) which includes humans but excludes most other mammals.
I would believe in solipsism.
I can hardly think of any political, economic, or moral assertion I'd regard as implausible, except that one of the world's extant religions is true (since that would have about as much internal consistency as "2 + 2 = 3").
The actual quote didn't contain the word "beat" at all. It was "Count be wrong, they fuck you up."
The fact that we find ourselves in a world which has not ended is not evidence.
lesswrong.com's web server is in the US but both of its nameservers are in Australia, leading to very slow lookups for me -- often slow enough that my resolver times out (and caches the failure).
I am my own DNS admin so I can work around this by forcing a cache flush when I need to, but I imagine this would be a more serious problem for people who rely on their ISPs' DNS servers.
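If you want a quick (and admittedly crude) way to check whether you're affected, something like this times whatever resolver your system is configured to use -- the second hostname is just a baseline of my choosing:

```python
import socket
import time

for host in ("lesswrong.com", "google.com"):
    t0 = time.monotonic()
    try:
        addr = socket.gethostbyname(host)  # goes through the system resolver
        print(f"{host}: {addr} in {time.monotonic() - t0:.2f}s")
    except socket.gaierror as err:
        print(f"{host}: failed after {time.monotonic() - t0:.2f}s ({err})")
```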
Quite a bit is known about the neurology behind face recognition. No one understands the algorithm well enough to build a fusiform gyrus from scratch, but that doesn't make the bare fact that there is an algorithm mysterious.
Each post under http://lesswrong.com/user/yourname/hidden/ should have an Unhide link.
Thanks, it looks like I misremembered -- if they're now doing perfusion after neuroseparation then it's much more likely to be compatible with organ donation.
I've sent Alcor a question about this.
This is the only reason I haven't signed up.
What I want to do is sign up for neuropreservation and donate any organs and tissues from the neck down, but as far as I can tell that's not even remotely feasible. Alcor's procedure involves cooling the whole body to 0°C and injecting the cryoprotectant before removing the head (and I can understand why perfusion would be a lot easier while the head is still attached). Also, I think it's doubtful that the cryonics team and the transplant team would coordinate with each other effectively, even if there were no technical obstacles.
Are we developing a new art of akrasia-fighting, or is this just repackaged garden-variety self-help?
Edit: I don't mean to disparage anyone's efforts to improve themselves. (My only objection to the field of "self-help" is that it's dominated by charlatans.) But there is an existing body of science here, and I fear that if we go down this road the Art of Rationality will turn into nothing more than amateur behavioral psychology.
If in 1660 you'd asked the first members of the Royal Society to list the ways in which natural philosophy had tangibly improved their lives, you probably wouldn't have gotten a very impressive list.
Looking over history, you would not have found any tendency for successful people to have made a formal study of natural philosophy.
Alcor says they have a >50% incidence of poor cases.
I strongly second the idea of using real science as a test. Jeffreyssai wouldn't be satisfied with feeding his students -- even the beginners -- artificial puzzles all day. Artificial puzzles are shallow.
It wouldn't even have to be historical science. Science is still young enough that there's a lot of low-hanging fruit. I don't think we have a shortage of scientific questions which are genuinely unanswered, yet can be recognized as answerable by a beginner or intermediate student in a moderate amount of time.
There's a heuristic at work here which isn't completely unreasonable.
I buy $15 items on a daily basis. If I form a habit of ignoring a $5 savings on such purchases, I'll be wasting a significant fraction of my income. I buy $125 items rarely enough that I can give myself permission to splurge and avoid the drive across town.
The percentage does matter -- it's a proxy for the rate at which the savings add up.
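Concretely, with purchase frequencies I'm assuming for illustration:

```python
# Assumed frequencies: the $15 purchase happens daily, the $125 one monthly.
small_items = 5 * 365   # $5 saved on a daily $15 item   -> $1825/year
large_items = 5 * 12    # $5 saved on a monthly $125 item -> $60/year
print(f"small purchases: ${small_items}/year, large purchases: ${large_items}/year")
```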
It's also a proxy for the importance of the savings relative to other considerations, which are often proportional to the value of what you're buying. If you were about to sign the papers on a $20,000 car purchase, would you walk away at the last minute if you found out that an identical car was available from another dealer for $19,995? Would you try to explicitly weigh the $5 against intangibles such as your level of trust in the first dealer compared to the second, or would you be right to regard the $5 as a distraction and ignore it?