Posts

Comments

Comment by ocr-fork on Desirable Dispositions and Rational Actions · 2010-08-18T06:30:55.151Z · LW · GW

CODT (Cop Out Decision Theory): In which you precommit to every beneficial precommitment.

Comment by ocr-fork on Desirable Dispositions and Rational Actions · 2010-08-18T06:27:16.394Z · LW · GW

I thought that debate was about free will.

Comment by ocr-fork on Desirable Dispositions and Rational Actions · 2010-08-18T06:22:28.454Z · LW · GW

This isn't obvious. In particular, note that your "obvious example" violates the basic assumption all these attempts at a decision theory are using, that the payoff depends only on your choice and not how you arrived at it.

Omega simulates you in a variety of scenarios. If you consistently make rational decisions he tortures you.

Comment by ocr-fork on Open Thread, August 2010-- part 2 · 2010-08-18T06:07:14.712Z · LW · GW

Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.

The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.

The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don't even need it...

He's also written a book called "Thank God for Evolution," in which he sprays God all over science to make it more palatable to Christians.

I dedicate this book to the glory of God. Not any "God" we may think about, speak about, believe in, or deny, but the one true God we all know and experience.

If he really is trying to deconvert people, I suspect it won't work. They won't take the final step from his pleasant, featureless god to no god, because the featureless one gives them a warm glow without any intellectual conflict.

Comment by ocr-fork on Open Thread, August 2010-- part 2 · 2010-08-18T05:38:32.162Z · LW · GW

How much more information is in the ontogenic environment, then?

Off the top of my head:

  1. The laws of physics

  2. 9 months in the womb

  3. The rest of your organs. (maybe)

  4. Your entire childhood...

These are barriers to developing Kurzweil's simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That's idiotic.

Comment by ocr-fork on Open Thread, August 2010 · 2010-08-14T20:08:59.746Z · LW · GW

One of the facts about 'hard' AI, as is required for profitable NLP, is that the coders who developed it don't even understand completely how it works. If they did, it would just be a regular program.

TLDR: this definitely is emergent behavior - it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about.

Yuck.

Comment by ocr-fork on Open Thread: July 2010, Part 2 · 2010-07-31T00:57:05.582Z · LW · GW

The first two questions aren't about decisions.

"I live in a perfectly simulated matrix"?

This question is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."

Comment by ocr-fork on Metaphilosophical Mysteries · 2010-07-30T01:28:34.685Z · LW · GW

it might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure so I used BB(2^(n+1)) in my example instead.

You can find it by emulating the Busy Beaver.

Comment by ocr-fork on Metaphilosophical Mysteries · 2010-07-30T00:23:53.234Z · LW · GW

Oh.

I feel stupid now.

EDIT: Wouldn't it also break even by predicting the next Busy Beaver number? "All 1's except for BB(1...2^n+1)" is also only slightly less likely. EDIT: I feel more stupid.

Comment by ocr-fork on Open Thread: July 2010 · 2010-07-29T23:35:30.401Z · LW · GW

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

Suicide rates start at .5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.

Comment by ocr-fork on Metaphilosophical Mysteries · 2010-07-29T23:12:49.763Z · LW · GW

What about the agent using Solomonoff's distribution? After seeing BB(1),...,BB(2^n), the algorithmic complexity of BB(1),...,BB(2^n) is sunk, so to speak. It will predict a higher expected payoff for playing 0 in any round i where the conditional complexity K(i | BB(1),...,BB(2^n)) < 100. This includes for example 2BB(2^n), 2BB(2^n)+1, BB(2^n)^2 * 3 + 4, BB(2^n)^^^3, etc. It will bet on 0 in these rounds (erroneously, since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n), and therefore lose relative to a human.

I don't understand how the bolded part follows. The best explanation by round BB(2^n) would be "All 1's except for the Busy Beaver numbers up to 2^n", right?

Comment by ocr-fork on Metaphilosophical Mysteries · 2010-07-29T22:14:28.737Z · LW · GW

Right, and...

A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).

So why can't the universal prior use it?
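
(For concreteness, a minimal sketch of the quoted fact, using a hypothetical lookup table: any finite prefix of Σ is computable simply because a program can hard-code the known values Σ(1)=1, Σ(2)=4, Σ(3)=6, Σ(4)=13. Nothing here touches the universal prior itself.)

```python
# Minimal illustration (not a real predictor): any *finite* prefix of the
# Busy Beaver Sigma function is computable, e.g. via a hard-coded table of
# the known values for 2-symbol Turing machines.
KNOWN_SIGMA = {1: 1, 2: 4, 3: 6, 4: 13}

def sigma_prefix(n):
    """Return [Sigma(1), ..., Sigma(n)] for n small enough to be in the table."""
    if n > max(KNOWN_SIGMA):
        raise ValueError("beyond the table -- and not computable in general")
    return [KNOWN_SIGMA[i] for i in range(1, n + 1)]

print(sigma_prefix(4))  # [1, 4, 6, 13]
```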

Comment by ocr-fork on Metaphilosophical Mysteries · 2010-07-29T22:02:50.045Z · LW · GW

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?

BB(100) is computable. Am I missing something?

Comment by ocr-fork on Open Thread: July 2010, Part 2 · 2010-07-29T16:43:10.668Z · LW · GW

But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.

I've read the post. That excuse is actually relevant.

Comment by ocr-fork on Metaphilosophical Mysteries · 2010-07-27T22:58:04.315Z · LW · GW

To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.

I don't see how Bayesian utility maximizers lack the "philosophical abilities" to discover these ideas. Also, the last one is only half true. The "wrong" link is about decision theory paradoxes, but a Bayesian utility maximizer would overcome these with practice.

Comment by ocr-fork on Book Review: The Root of Thought · 2010-07-23T23:06:37.854Z · LW · GW

astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which resembles the random, unprovoked nature of anything that's random.

Comment by ocr-fork on Unknown knowns: Why did you choose to be monogamous? · 2010-06-26T04:41:56.539Z · LW · GW

But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don't know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.

Does anyone else feel like this is just a weird remake of cached thoughts?

Comment by ocr-fork on Is cryonics necessary?: Writing yourself into the future · 2010-06-24T16:35:24.367Z · LW · GW

They remember being themselves, so they'd say "yes."

I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant "why do you think a vitrified brain is conscious if a book isn't."

Comment by ocr-fork on Is cryonics necessary?: Writing yourself into the future · 2010-06-24T15:58:28.896Z · LW · GW

Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.

Comment by ocr-fork on Is cryonics necessary?: Writing yourself into the future · 2010-06-24T06:05:06.331Z · LW · GW

There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction.

That's orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.

Comment by ocr-fork on Is cryonics necessary?: Writing yourself into the future · 2010-06-24T04:37:09.098Z · LW · GW

Is a vitrified brain conscious?

Comment by ocr-fork on Is cryonics necessary?: Writing yourself into the future · 2010-06-24T01:20:32.604Z · LW · GW

Depending on how you present it you can potentially get people to keep these kinds of writings even if they don't believe it will extend their lives in any meaningful way,

Writing isn't feasible, but lifelogging might be (see gwern's thread). The government could hand out wearable cameras that double as driving licenses, credit cards, etc. If anyone objects, all they have to do is rip out the right wires.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-08T20:32:24.520Z · LW · GW

In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky.

I'm convinced it would never converge, and even if it did, I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-08T18:48:29.632Z · LW · GW

I'm really confused now. Also I haven't read Permutation City...

Just because one deterministic world will always end up simulating another does not mean there is only one possible world that would end up simulating that world.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-08T18:25:05.173Z · LW · GW

Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because it is busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.

558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won't mirror the new 559's actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran them to their conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above is still frozen.

So that's why restarting the simulation shouldn't work.

But what if two groups had built such computers independently? The story is making less and less sense to me.

Then instead of a stack, you have a binary tree.

Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-World. You create a cube in A-World and a cube appears in your world. Now you know you are an A-World. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World... The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.

You can avoid this by always doing the same thing to A and B. Then everything behaves like an ordinary stack.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-08T16:14:33.893Z · LW · GW

Until they turned it on, they thought it was the only layer.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-08T14:57:59.253Z · LW · GW

That doesn't say anything about the top layer.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-08T01:32:09.670Z · LW · GW

Then it would be someone else's reality, not theirs. They can't be inside two simulations at once.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-08T00:48:16.734Z · LW · GW

Then they miss their chance to control reality. They could make a shield out of black cubes.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-08T00:32:58.163Z · LW · GW

Why do you think deterministic worlds can only spawn simulations of themselves?

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-07T16:49:52.557Z · LW · GW

Of course. It's fiction.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-07T16:45:33.717Z · LW · GW

With 1), you're the non-cooperator and the punisher is society in general. With 2), you play both roles at different times.

Comment by ocr-fork on Open Thread June 2010, Part 2 · 2010-06-07T16:16:09.453Z · LW · GW

First, the notion that a quantum computer would have infinite processing capability is incorrect... Second, if our understanding of quantum mechanics is correct

It isn't. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power, because...

Comment by ocr-fork on Open Thread: June 2010 · 2010-06-03T01:39:46.227Z · LW · GW

Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos.

I winced.

Comment by ocr-fork on Diseased thinking: dissolving questions about disease · 2010-05-31T22:26:02.997Z · LW · GW

How is what is proposed above different from imprisoning these groups?

It's not different. Vladimir is arguing that if you agree with the article, you should also support preemptive imprisonment.

Comment by ocr-fork on Diseased thinking: dissolving questions about disease · 2010-05-31T22:02:14.805Z · LW · GW

Regret doesn't cure STDs.

Comment by ocr-fork on Diseased thinking: dissolving questions about disease · 2010-05-31T16:38:45.828Z · LW · GW

How much of a statistical correlation would you require?

Enough to justify imprisoning everyone. It depends on how long they'd stay in jail, the magnitude of the crime, etc.

I really don't care what Ben Franklin thinks.

Comment by ocr-fork on On Less Wrong traffic and new users -- and how you can help · 2010-05-31T14:47:03.513Z · LW · GW

The search engines have their own incentives to avoid punishing innocent sites.

Comment by ocr-fork on On Less Wrong traffic and new users -- and how you can help · 2010-05-31T09:43:41.816Z · LW · GW

If you're trying to outpaperclip SEO-paperclippers you'll need a lot better than that.

I doubt LessWrong has any competitors serious enough for SEO.

Yudkowsky.net comes up as #5 on the "rationality" search, and, being surrounded by uglier sites, it should stand out to anyone who looks past Wikipedia. But LessWrong is only mentioned twice, and not on the twelve virtues page that new users will see first. I think you could snag a lot of people with a third mention on that page, or maybe even a bright green logo-button.

Comment by ocr-fork on Significance of Compression Rate Method · 2010-05-31T07:36:05.558Z · LW · GW

First, infer the existence of people, emotions, stock traders, the press, factories, production costs, and companies. When that's done your theory should follow trivially from the source code of your compression algorithm. Just make sure your computer doesn't decay into dust before it gets that far.

Comment by ocr-fork on Open Thread: May 2010, Part 2 · 2010-05-31T07:15:12.599Z · LW · GW

Sell patents.

(or more specifically, patent your invention and wait until someone else wants to use it. If this seems unethical, remember you will usually be blocking big evil corporations, not other inventors, and that the big evil corporations would always do the same thing to you if they could.)

Comment by ocr-fork on Diseased thinking: dissolving questions about disease · 2010-05-31T06:01:55.970Z · LW · GW

If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb's problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.

Please see the edit I just added to the post; it seems like my wording wasn't precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb's problem).

I would also be ok with this... however by your own definition it would never happen in practice, except for extreme cases like cults or a rage virus that only infects redheads.

Comment by ocr-fork on Multiple Choice · 2010-05-18T00:59:54.635Z · LW · GW

I don't think you can work towards not being offended (according to my very narrow definition, which I now retract). It's just a gut reaction.

Comment by ocr-fork on Multiple Choice · 2010-05-18T00:14:32.013Z · LW · GW

Conversations with foreigners?

Comment by ocr-fork on Preface to a Proposal for a New Mode of Inquiry · 2010-05-18T00:09:14.415Z · LW · GW

Let's all take some deep breaths.

I sense this thread has crossed a threshold, beyond which questions and criticisms will multiply faster than they can be answered.

Comment by ocr-fork on Preface to a Proposal for a New Mode of Inquiry · 2010-05-17T23:42:06.324Z · LW · GW

Which will be soon, right?

Comment by ocr-fork on Multiple Choice · 2010-05-17T23:21:37.777Z · LW · GW

edit: here's an example:

If you're risk-neutral, you still can't just do whatever has the highest chance of being right; you must also consider the cost of being wrong. You will probably win a bet that says a fair six-sided die will come up on a number greater than 2. But you shouldn't buy this bet for a dollar if the payoff is only $1.10, even though that purchase can be summarized as "you will probably gain ten cents". That bet is better than a similarly-priced, similarly-paid bet on the opposite outcome; but it's not good.

You have a 1/3 chance of losing a dollar and a 2/3 chance of gaining ten cents. On average, you will lose about 27 cents per bet. Unless you need that dime to buy a ticket for the last plane leaving your doomed volcanic island home... it's a bad bet.
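
(A quick sanity check of that arithmetic, as a minimal sketch assuming the bet costs $1.00 and pays $1.10 whenever the die shows 3 through 6:)

```python
# Expected value of the die bet: pay $1.00, receive $1.10 (net +$0.10)
# on a roll of 3-6, lose the dollar otherwise.
p_win, p_lose = 4 / 6, 2 / 6
net_win, net_lose = 0.10, -1.00

ev = p_win * net_win + p_lose * net_lose
print(f"expected value per bet: ${ev:.4f}")  # roughly -$0.27
```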

Also see: applause lights

Comment by ocr-fork on Preface to a Proposal for a New Mode of Inquiry · 2010-05-17T21:44:58.238Z · LW · GW

I don't get it. Are you saying a smart, dangerous AI can't be simple and predictable? Differential equations are made of algebra, so did she mean the task is impossible? You were replying to my post, right?

Comment by ocr-fork on Preface to a Proposal for a New Mode of Inquiry · 2010-05-17T13:39:02.697Z · LW · GW

An AI that acts like people? I wouldn't buy that. It sounds creepy. Like Clippy with a soul.

Comment by ocr-fork on Preface to a Proposal for a New Mode of Inquiry · 2010-05-17T13:03:23.707Z · LW · GW

What else is there to see besides humans?