Open thread, August 28 - September 3, 2017

post by Thomas · 2017-08-28T06:11:19.159Z · LW · GW · Legacy · 71 comments

If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

71 comments

Comments sorted by top scores.

comment by Elo · 2017-08-29T23:27:05.824Z · LW(p) · GW(p)

Hamming question: if your life were a movie and you were watching your life on screen, what would you be yelling at the main character? (example: don't go in the woods alone! Hurry up and see the quest guy! Just drop the sunk costs and do X) (optional - answer in public or private)

Replies from: Screwtape
comment by Error · 2017-08-28T17:38:09.244Z · LW(p) · GW(p)

I'm looking for an anecdote about sunk costs. Two executives were discussing some bad business situation, and one of them asked: "Look, suppose the board were to fire us and bring new execs in. What would those guys do?" "Get us out of the X business." "Then what's to stop us from leaving the room, coming back in, and doing exactly that?"

...but all my google-fu can't turn up the original source. Does it sound familiar to anyone here?

Replies from: Unnamed
comment by Unnamed · 2017-08-28T18:42:43.172Z · LW(p) · GW(p)

Intel, 1985.

Grove says he and Moore were in his cubicle, "sitting around ... looking out the window, very sad." Then Grove asked Moore a question.

"What would happen if somebody took us over, got rid of us — what would the new guy do?" he said.

"Get out of the memory business," Moore answered.

Grove agreed. And he suggested that they be the ones to get Intel out of the memory business.

Replies from: Error
comment by Error · 2017-08-28T20:12:17.183Z · LW(p) · GW(p)

Thanks, that's the one.

comment by cousin_it · 2017-08-31T10:08:19.621Z · LW(p) · GW(p)

It seems to me that there's no difference in kind between moral intuitions and religious beliefs, except that the former are more deeply held. (I guess that makes me a kind of error theorist.)

If that's true, that means FAI designers shouldn't work on approaches like "extrapolation" that can convert a religious person to an atheist, because the same procedure might convert you into a moral nihilist. The task of FAI designers is more subtle: devise an algorithm that, when applied to religious belief, would encode it "faithfully" as a utility function, despite the absence of God.

Does that sound right? I've never seen it spelled out as strongly, but logically it seems inevitable.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2017-08-31T12:06:49.176Z · LW(p) · GW(p)

It seems to me that there's no difference in kind between moral intuitions and religious beliefs,

That just doesn't seem true to me. I agree that there's often a difference between religious beliefs and ordinary factual beliefs, but I don't think that religious beliefs are the same sort of thing as moral intuitions. They just feel different to me.

For one thing religious beliefs are often a "belief in belief" whereas I don't think moral beliefs are like that.

Also moral beliefs seem more instinctual, whereas religious beliefs are taught.

Replies from: entirelyuseless, cousin_it
comment by entirelyuseless · 2017-09-01T02:15:00.900Z · LW(p) · GW(p)

For one thing religious beliefs are often a "belief in belief" whereas I don't think moral beliefs are like that.

I think moral beliefs are very often like that, at least for some people. See the comment here and JM's response.

Stephen Diamond makes a related argument, namely that people will not give up moral beliefs because it is obviously wicked to do so, according to those very same moral beliefs, in the same way that a religious person will not give up their religious beliefs because those beliefs say it would be wicked to do so.

comment by cousin_it · 2017-08-31T12:28:23.823Z · LW(p) · GW(p)

Every emotion connected with moral intuitions, e.g. recoiling from a bad act, can also happen due to religious beliefs.

comment by morganism · 2017-08-30T02:57:42.011Z · LW(p) · GW(p)

Low-fat diet could kill you, major study shows (Lancet Canadian study of 135,000 adults)

http://www.telegraph.co.uk/news/2017/08/29/low-fat-diet-linked-higher-death-rates-major-lancet-study-finds/amp/

"those with low intake of saturated fat raised chances of early death by 13 per cent compared to those eating plenty.

And consuming high levels of all fats cut mortality by up to 23 per cent."

“Higher intake of fats, including saturated fats, are associated with lower risk of mortality.”

“Our data suggests that low fat diets put populations at increased risk for cardiovascular disease."

comment by halcyon · 2017-08-29T20:23:17.106Z · LW(p) · GW(p)

Integrals sum over infinitely small values. Is it possible to multiply infinitely small factors? For example, the integral of some arbitrary dx is a constant, since infinitely many infinitely small values can sum up to any constant. But can you do something along the lines of taking an infinitely large root of a constant, and get an infinitesimal differential in that way? Multiplying those differentials will yield some constant again.

My off the cuff impression is that this probably won't lead to genuinely new math. In the most basic case, all it does is move the integrations into the powers that other stuff is raised by. But if we somehow end up with complicated patterns of logarithms and exponentiations, like if that other stuff itself involves calculus and so on, then who knows? Is there a standard name for this operation?

Replies from: Manfred, Oscar_Cunningham, cousin_it, Thomas
comment by Manfred · 2017-08-29T21:41:37.991Z · LW(p) · GW(p)

What is the analogy of sum that you're thinking about? Ignoring how the little pieces are defined, what would be a cool way to combine them? For example, you can take the product of a series of numbers to get any number, that's pretty cool. And then you can convert a series to a continuous function by taking a limit, just like an integral, except rather than the limit going to really small pieces, the limit goes to pieces really close to 1.

You could also raise a base to a series of powers to get any number, then take that to a continuous limit to get an integral-analogue. Or do other operations in series, but I can't think of any really motivating ones right now.

Can you invert these to get derivative-analogues (wiki page)? For the product integral, the value of the corresponding derivative turns out to be the limit of more and more extreme inverse roots, as you bring the ratio of two points close to 1.

Are there any other interesting derivative-analogues? What if you took the inverse of the difference between points, but then took a larger and larger root? Hmm... You'd get something that was 1 almost everywhere for nice functions, except where the function's slope got super-polynomially flat or super-polynomially steep.
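For instance, here's a rough numerical sketch of that derivative-analogue (f(x) = x^2 at x = 2 is just an arbitrary example):

```python
import math

# Sketch of the product-derivative idea: take the ratio of f at two nearby
# points and raise it to an ever larger inverse power (1/h) as h shrinks.
# f(x) = x**2 and x = 2 are arbitrary illustrative choices.
f = lambda x: x ** 2
x = 2.0

for h in (1e-1, 1e-3, 1e-5):
    ratio = f(x + h) / f(x)
    print(h, ratio ** (1.0 / h))

# The values approach exp(f'(x) / f(x)); for f(x) = x**2 at x = 2 that is e.
print(math.exp(2 * x / x ** 2))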

Replies from: halcyon
comment by halcyon · 2017-08-31T12:33:21.333Z · LW(p) · GW(p)

Someone has probably thought of this already, but if we defined an integration analogue where larger and larger logarithmic sums cause their exponentiated, etc. value to approach 1 rather than infinity, then we could use it to define a really cool account of logical metaphysics: Each possible state of affairs has an infinitesimal probability, there are infinitely many of them, and their probabilities sum to 1. This probably won't be exhaustive in some absolute sense, since no formal system is both consistent and complete, but if we define states of affairs as formulas in some consistent language, then why not? We can then assign various differential formulas to different classes of states of affairs.

(That is the context in which this came up. The specific situation is more technically convoluted.)

comment by Oscar_Cunningham · 2017-08-29T22:57:17.679Z · LW(p) · GW(p)

Good question!

The answer is called a Product integral. You basically just use the property

log(ab) = log(a) + log(b)

to turn your product integral into a normal integral

product integral of f(x) = e ^ [normal integral of log(f(x))]
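A quick numerical sanity check of that identity might look like this (a rough sketch; f(x) = x on [1, 2] is an arbitrary example):

```python
import math

# Rough check: multiply out the f(x)^dx factors directly and compare with
# e raised to the ordinary integral of log(f(x)).
f = lambda x: x
a, b, n = 1.0, 2.0, 100_000
dx = (b - a) / n
xs = [a + (k + 0.5) * dx for k in range(n)]

# Left side: the product of f(x)^dx factors.
prod = 1.0
for x in xs:
    prod *= f(x) ** dx

# Right side: exp of the ordinary integral of log(f(x)).
rhs = math.exp(sum(math.log(f(x)) * dx for x in xs))

print(prod, rhs)   # both close to 4/e ~ 1.4715 for this particular f
```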

Replies from: halcyon, Thomas
comment by halcyon · 2017-08-31T12:47:10.701Z · LW(p) · GW(p)

Thanks, product integral is what I was talking about. The exponentiated integral is what I meant when I said the integration will move into the power term.

comment by Thomas · 2017-08-31T09:42:32.370Z · LW(p) · GW(p)

I think that was not his question. He didn't ask about the product integral of f(x), but about the "product integral of x".

EDIT: And that for "small x". At least that's how I understood his question.

Replies from: halcyon
comment by halcyon · 2017-08-31T12:58:35.159Z · LW(p) · GW(p)

No, he's right. I didn't think to clarify that my infinitely small factors are infinitesimally larger than 1, not 0. See the Type II product integral formula on Wikipedia that uses 1 + f(x).dx.

comment by cousin_it · 2017-08-29T22:34:08.634Z · LW(p) · GW(p)

sum : integral of f(x) :: product : exp(integral of log(f(x)))

comment by Thomas · 2017-08-29T21:39:32.375Z · LW(p) · GW(p)

I am afraid that multiplying even countably many small numbers yields 0 - let alone the product of more than that, which is what your integration-analogous operation would be.

You can get a nonzero product if and only if the sum of the differences between 1 and your factors converges. But if all the factors are smaller than, say, 0.9 ... you get 0.

Unless you can find some creative way around that anyway. Might be possible, I don't know.

Replies from: halcyon
comment by halcyon · 2017-08-31T13:08:41.635Z · LW(p) · GW(p)

Yeah, it might have helped to clarify that the infinitesimal factors I had in mind are not infinitely small as numbers from the standpoint of addition. Since the factor that makes no change to the product is 1 rather than 0, "infinitely small" factors must be infinitesimally greater than 1, not 0. In particular, I was talking about a Type II product integral with the formula pi(1 + f(x).dx). If f(x) = 1, then we get e^sigma(1.dx) = e^constant = constant, right?
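A small numerical sketch of that case (taking the interval to be [0, 1], which is just an illustrative assumption, so the product should tend to e):

```python
# Type II product integral pi(1 + f(x) dx): factors only infinitesimally
# above 1, multiplied together, tend to exp(integral of f).
f = lambda x: 1.0          # the f(x) = 1 case from the comment above
a, b = 0.0, 1.0

for n in (10, 1_000, 100_000):
    dx = (b - a) / n
    prod = 1.0
    for k in range(n):
        x = a + (k + 0.5) * dx
        prod *= 1.0 + f(x) * dx   # each factor is only slightly above 1
    print(n, prod)                # approaches e ~ 2.71828 as n grows
```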

Replies from: Thomas
comment by Thomas · 2017-08-31T13:31:47.036Z · LW(p) · GW(p)

Right. Around 1 you often can actually multiply an infinite number of factors and get some finite result.

comment by Thomas · 2017-08-28T06:12:31.800Z · LW(p) · GW(p)

There is a problem ...

Replies from: cousin_it, Dagon, WalterL
comment by cousin_it · 2017-08-28T11:01:23.480Z · LW(p) · GW(p)

The best strategy is to always say "it's the first time". (Or, equivalently, always say "it's the second time", etc.)

Replies from: Thomas
comment by Thomas · 2017-08-28T11:16:27.167Z · LW(p) · GW(p)

No. If that damn dungeon master hadn't tossed that fair coin himself first, then the best strategy would be to say "It's my first time here" - and you'd be free.

But it may very well be that he tossed heads before and put you right back to sleep with amnesia induced. In that case, you never get out.

Replies from: cousin_it
comment by cousin_it · 2017-08-28T11:34:27.235Z · LW(p) · GW(p)

My strategy gives probability 1/2 of escaping. Can you show some strategy that gives higher probability? Doesn't have to be the best.

Replies from: Thomas
comment by Thomas · 2017-08-28T11:47:00.357Z · LW(p) · GW(p)

If you always say "It's my first time", you will be freed with probability 1/2, yes.

I'll give the best strategy I know before the end of this week. Now, it would be a spoiler.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2017-08-28T17:20:01.883Z · LW(p) · GW(p)

Let p_n be the probability that I say n. Then the probability I escape on exactly the nth round is at most p_n/2, since the coin has to come up on the correct side and then I have to say n. In fact the probability is normally less than that, since there is a possibility that I have already escaped. So the probability I escape is at most the sum over n of p_n/2. Since p_n is a probability distribution it sums to 1, so this is at most 1/2. I'll escape with probability strictly less than this if I have any two p_n nonzero. So the optimal strategies are precisely to always say the same number, and this can be any number.
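Concretely, a rough sketch of the calculation (it assumes that on each round the coin and the guess are independent of all previous rounds; the example distributions are arbitrary):

```python
def escape_probability(p, rounds=10_000):
    """p(n) is the probability of guessing 'n' on any given awakening."""
    prob_never = 1.0
    for n in range(1, rounds + 1):
        prob_never *= 1.0 - 0.5 * p(n)   # survive round n: wrong coin or wrong guess
    return 1.0 - prob_never

always_one = lambda n: 1.0 if n == 1 else 0.0        # always say "it's the first time"
split_1_2  = lambda n: 0.5 if n in (1, 2) else 0.0   # say 1 or 2 with probability 1/2 each
geometric  = lambda n: 0.5 ** n                      # flip your own coin until tails

print(escape_probability(always_one))  # 0.5
print(escape_probability(split_1_2))   # 0.4375 = 7/16
print(escape_probability(geometric))   # strictly below 0.5
```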

Replies from: Unnamed, Thomas
comment by Unnamed · 2017-08-31T22:14:09.892Z · LW(p) · GW(p)

I got the same answer, with essentially the same reasoning.

Assuming that each guess is a draw from the same probability distribution over positive integers, the expected number of correct guesses is 0.5 if I keep guessing forever (rather than leaving after 1 correct guess), regardless of what distribution I choose.

So the probability of getting at least one correct guess (which is the win condition) is capped at 0.5. And the only way to hit that maximum is by removing all the scenarios where I guess correctly more than once, so that all of the expected value comes from the scenarios where I guess correctly exactly once.

comment by Thomas · 2017-08-31T12:53:38.341Z · LW(p) · GW(p)

Define flip values as H=0 and T=1. You flip this fair coin twice. You increase x = x + value(1), y = y + value(2), and z = z + 1. If x > y, you stop flipping and declare: it's the z-th round of the game.

For example, after TH, x=1 and y=0 and z=1. You stop tossing and declare 1st round. If it is HH, you continue tossing it twice again.

No matter how late in the game you are, you have a nonzero probability to win. Chebyshev (and Chernoff) can help you improve the x>y condition a bit. I don't know how much yet. Nor do I have a proof that the probability of exiting is then > 1/2. But it is at least that much. Some Monte-Carloing seems to agree.

Replies from: Dagon, Oscar_Cunningham, Dagon
comment by Dagon · 2017-08-31T23:57:02.585Z · LW(p) · GW(p)

Would you mind showing your work on monte-carlo for this? If you've tried more than a few runs and they all actually terminated, you have a bug.

You're describing a random-walk that moves left 25% of the time, right 25% and does not move 50% of the time, and counting steps until you get to 1. There is no possibility that this ends up better than 50% to exit after the same number of steps as 0.50^n.

Replies from: Thomas
comment by Thomas · 2017-09-01T07:57:32.107Z · LW(p) · GW(p)

1st round:          1 /           8 = 0.125
2nd round:         15 /          64 = 0.234
3rd round:        164 /         512 = 0.32
4th round:      1,585 /       4,096 = 0.387
5th round:     14,392 /      32,768 = 0.439
6th round:    126,070 /     262,144 = 0.481
7th round:  1,079,808 /   2,097,152 = 0.515
8th round:  9,111,813 /  16,777,216 = 0.543
9th round: 76,095,176 / 134,217,728 = 0.567

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2017-09-01T09:57:53.785Z · LW(p) · GW(p)

I think you must just have an error in your code somewhere. Consider going round 3. Let the probability you say "3" be p_3. Then according to your numbers

164/512 = 15/64 + (1 - 15/64)*(1/2)*p_3

Since the probability of escaping by round 3 is the probability of escape by round 2, plus the probability you don't escape by round 2, multiplied by the probability the coin lands tails, multiplied by the probability you say "3".

But then p_3 = 11/49, and 49 is not a power of two!

Replies from: Thomas
comment by Thomas · 2017-09-01T12:46:41.994Z · LW(p) · GW(p)

Say that SB has only 10 tries to escape.

The DM (Dungeon Master) tosses his 10 coins and SB tosses her 20 coins, even before the game begins.

There are 2^30 possible outputs, which is about a billion. More than half of them grant her freedom.

We compute her exit as follows: at the earliest point in each output bit string where the x>y condition holds, the DM also has the freeing coin toss.

comment by Oscar_Cunningham · 2017-08-31T18:00:01.327Z · LW(p) · GW(p)

Based on some heuristic calculations I did, it seems that the probability of escape with this plan is exactly 4/10.
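Here's a rough sketch of how one might check that by brute force (it assumes the model above: each awakening the x/y flipping is rerun from scratch, and escape needs both the right coin and a declared z equal to the current round, independently of previous rounds):

```python
MAX_ROUNDS = 3000  # cutoff; the tail beyond this contributes very little

# p[n] = probability that the x/y walk first satisfies x > y after exactly n
# double-flips, i.e. the probability of declaring "round n" on any awakening.
# Track the distribution of d = x - y over walks that have never had d > 0.
alive = {0: 1.0}
p = [0.0] * (MAX_ROUNDS + 1)
for n in range(1, MAX_ROUNDS + 1):
    new_alive = {}
    for d, pr in alive.items():
        # each double-flip: d goes up 1 (prob 1/4), stays (1/2), or goes down 1 (1/4)
        if d == 0:
            p[n] += pr * 0.25   # stepping up from 0 makes x > y for the first time
        else:
            new_alive[d + 1] = new_alive.get(d + 1, 0.0) + pr * 0.25
        new_alive[d] = new_alive.get(d, 0.0) + pr * 0.5
        new_alive[d - 1] = new_alive.get(d - 1, 0.0) + pr * 0.25
    alive = new_alive

prob_never = 1.0
for n in range(1, MAX_ROUNDS + 1):
    prob_never *= 1.0 - 0.5 * p[n]
print(1.0 - prob_never)   # a little below 0.4; the cutoff slightly understates the limit
```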

Replies from: Thomas
comment by Thomas · 2017-08-31T19:02:05.273Z · LW(p) · GW(p)

Interesting. Do you agree that every number is reached by the z function defined above, an infinite number of times?

And yet, every single time z != sleeping_round? In the 60 percent of these Sleeping Beauty imprisonments?

Even if the condition x>y is replaced by something like x>y+sqrt(y) or whatever formula, you can't go above 50%?

Extraordinary. Might be possible, though.

You clearly have a function N->N where eventually every natural number is a value of this function f, but f(n)!=n for all n.

That would be easier if f(n) >> n almost always. But sometimes it is bigger, sometimes smaller.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2017-08-31T20:08:18.999Z · LW(p) · GW(p)

Do you agree that every number is reached by the z function defined above, an infinite number of times?

Yes, definitely.

Even if the condition x>y is replaced by something like x>y+sqrt(y) or whatever formula, you can't go above 50%?

Yes. I proved it.

You clearly have a function N->N where eventually every natural number is a value of this function f, but f(n)!=n for all n.

Well, on average we have f(n)=n for one n, but there's a 50% chance the guy won't ask us on that round.

comment by Dagon · 2017-08-31T16:12:44.843Z · LW(p) · GW(p)

There are two pretty strong sketches above showing that this approaches 1/2 as you get closer to any static answer, but cannot beat 1/2.

The best answer is "ignore the coin, declare first". There is no better chance of escape (though there are many ties at 1/2), and this minimizes your time in purgatory in the case that you do escape.

Replies from: Thomas
comment by Thomas · 2017-08-31T16:47:30.898Z · LW(p) · GW(p)

So, you say, the Sleeping Beauty is there forever with probability at least 1/2.

Then she has all the time in the world to exercise this function which outputs z. Do you agree that every natural number will eventually be reached by this algorithm - counting the double tossings, adding 0 or 1 to x and 0 or 1 to y, and increasing z until x > y?

Agree?

Replies from: Dagon
comment by Dagon · 2017-08-31T18:43:56.581Z · LW(p) · GW(p)

She has all the time in the world, but only as much probability as she gave up by not saying "first".

Every natural number is reachable by your algorithm, but the probability that it's reached ON THE SAME ITERATION as the wake-up schedule converges to zero pretty quickly. Both the iterations and her responses approach infinity, and the product of the probabilities approaches zero way faster than the probabilities themselves.

Really. Go to Wolfram and calculate "sum to infinity 0.5^n * 0.5^n". The chance that the current wake-up is n is clearly 0.5^n - 50% chance of T, 25% of HT, 12.5% of HHT, etc. If your distribution is different, replace the second 0.5^n with any formula such that "sum to infinity YOUR_FORMULA" is 1. It's 0.3333 that she'll EVER escape if she randomizes across infinite possibilities with that same distribution, and it gets closer to (but doesn't reach) 50% if she front-weights the distribution.
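Roughly, in code (a quick sketch of those sums; the front-weighted example is an arbitrary illustration):

```python
# Geometric case: chance the current wake-up is n is 0.5^n, chance of guessing
# "n" is also 0.5^n, so the total comes to sum of 0.25^n = 1/3.
print(sum(0.25 ** n for n in range(1, 200)))            # ~0.3333

# Front-weighting the guesses pushes the total toward, but never past, 0.5.
# q is an arbitrary choice: put almost all the mass on guessing "1".
q = 0.999
guess = [q] + [(1 - q) * 0.5 ** k for k in range(1, 200)]    # guess[k] = P(say k+1)
print(sum(0.5 ** (k + 1) * g for k, g in enumerate(guess)))  # just under 0.5
```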

Replies from: Thomas
comment by Thomas · 2017-08-31T19:21:20.542Z · LW(p) · GW(p)

Your basic idea is right here. But ... this product isn't that straightforward.

Say it's the 100th session. There are a lot of ways for x to become greater than y exactly this time. Especially if the formula is x > y + sqrt(y) or something similar from Chebyshev's arsenal.

If the session is then the 101st, this new small probability isn't much smaller than it was in the 100th session.

Still, you may be right that the product (1-p_n)*(1-p_{n+1})*... converges to 1/2 at the most.

Well, I doubt it.

Replies from: Dagon
comment by Dagon · 2017-08-31T20:04:46.339Z · LW(p) · GW(p)

Don't doubt it, do the math (http://www.wolframalpha.com/ helps a LOT with this). Provide any formula for the probability of guessing "Nth wakeup" such that it sums to 1 (or less) from 1 to infinity. Calculate the sum from 1 to infinity of the product of this and the 0.5^n chance that you're currently on the Nth wakeup.

You will never find one that sums to better than 0.5.

Your weirdness using X and Y is not helping - any algorithm you can state eventually comes out to "some probability for each N of guessing N". And when you view it that way, you'll see that the sum has to be less than 50%.

comment by Dagon · 2017-08-28T18:47:18.054Z · LW(p) · GW(p)

Simplest is always "this is wakening #1". 50% chance of escape, and soonest possible if it happens. Has the psychological disadvantage that if the first coin is tails, you're stuck forever with no future chance of escape. You have no memory of any of them, so that is irrelevant - all that matters is the probability of escape - but it feels bad to us as outside observers.

You can stretch it out by randomizing your answer with a declining chance of higher numbers. Say, flip YOUR coin until you get tails, then guess the number of flips you've made: HHHT, you guess 4, for example. This gives you 25% to be released on day 1 (50% that the DM's coin is tails x 50% that your first flip is). And 6.25% to be released the second day (25% of HT on his coin and yours). Unfortunately, sum (i: 1->infinity) 0.25^i = 0.33333, so your overall chance of escaping is reduced. But you do always have some (very small) hope, unlike the simple answer.

Randomization with weighting toward earlier numbers improves your early chance, but reduces your later chances, and it seems (from sampling and thinking, not proven) that it can approach 0.5 but not exceed it.

I think the best you can do is 50% unless you have some information when you wake up about how long this has been going on.

comment by WalterL · 2017-08-28T13:32:45.470Z · LW(p) · GW(p)

Is it possible to pass information between awakenings? Use coin to scratch floor or something?

Replies from: Thomas
comment by Thomas · 2017-08-28T13:35:36.926Z · LW(p) · GW(p)

No, that is not possible.

Replies from: WalterL
comment by WalterL · 2017-08-28T13:43:07.406Z · LW(p) · GW(p)

So you only get one choice, since you will make the same one every time. I guess for simplicity choose 'first', but any number has the same chance.

Replies from: Thomas
comment by Thomas · 2017-08-28T14:02:59.405Z · LW(p) · GW(p)

Can you do worse than that?

Replies from: WalterL
comment by WalterL · 2017-08-28T14:21:25.508Z · LW(p) · GW(p)

Sure, you can guess zero or negative numbers or whatever.

Replies from: Thomas
comment by Thomas · 2017-08-28T14:24:48.584Z · LW(p) · GW(p)

Say, you must always give a positive number. Can you do worse than 1/2 then?

Replies from: WalterL, cousin_it
comment by WalterL · 2017-08-28T15:30:42.405Z · LW(p) · GW(p)

No. You will always say the same number each time, since you are identical each time.

As long as it isn't that number, you are going another round. Eventually it gets to that number, whereupon you go free if you get the luck of the coin, or go back under if you miss it.

Replies from: Thomas
comment by Thomas · 2017-08-28T15:33:50.612Z · LW(p) · GW(p)

You will always say the same number each time, since you are identical each time.

That's why you get a fair coin. Like a program which gets the seed for its random number generator from the clock.

Replies from: WalterL
comment by WalterL · 2017-08-28T20:31:09.243Z · LW(p) · GW(p)

Coin doesn't help. Say I decide to pick 2 if it is heads, 1 if it is tails.

I've lowered my odds of escaping on try 1 to 1/4, which initially looks good, but the overall chance stays the same, since I get another 1/4 on the second round. If I do 2 flips, and use the 4-way spread there to get 1, 2, 3, or 4, then I have an eighth of a chance on each of rounds 1-4.

Similarly, if I raise the number of outcomes that point to one number, that round's chance goes up, but the others decline, so my overall chance stays pegged to 1/2. (I.e., if HH, HT, TH all make me say 1, then I have a 3/8 chance that round, but only a 1/8 chance of being awake on round 2 and getting TT.)

Replies from: Thomas
comment by Thomas · 2017-08-28T20:58:57.450Z · LW(p) · GW(p)

The coin can at least lower your chances. Say that you will say 3 if it is heads and 4 if it is tails.

You can win at round 3 with probability 1/4 and you can win at round 4 with probability 1/4.

Is that right?

Replies from: WalterL
comment by WalterL · 2017-08-29T02:39:59.762Z · LW(p) · GW(p)

Oh, yeah, I see what you are saying. Having two 1/4 chances is, what, 7/16 of escape, so the coin does make it worse.

Replies from: Thomas
comment by Thomas · 2017-08-29T06:56:59.608Z · LW(p) · GW(p)

Sure. But not only to 7/16 - to an infinite number of other values, too. You just have to play with it longer.

The question now is, can the coin make it better, too? If not, why can it only make it worse?

Replies from: Gurkenglas
comment by Gurkenglas · 2017-08-29T19:12:01.096Z · LW(p) · GW(p)

If you say two numbers with nonzero probability, you can improve your chances by shifting all the probability mass to one of them.

comment by cousin_it · 2017-08-28T14:45:50.831Z · LW(p) · GW(p)

If you say either 1 or 2 with probability 1/2 each, the probability of escaping is 7/16.

Replies from: Thomas
comment by Thomas · 2017-08-28T14:57:36.808Z · LW(p) · GW(p)

True. You can do worse than 1/2. Just toss a coin and if it lands heads up choose 1, otherwise choose 2.

You can link more numbers this way and it can be even worse.

comment by morganism · 2017-08-31T08:24:45.560Z · LW(p) · GW(p)

The Accidental Elitist- (academic jargon)

https://thebaffler.com/latest/accidental-elitism-alvarez

" there’s a huge difference between jargon as a necessarily difficult tool required for the academic work of tackling difficult concepts, and jargon as something used by tools simply to prove they’re academics."

"confirm your choice to be a so-called academic, to assume it not only as a profession, but an identity, and to wear on yourself the trappings that come with that identity without stopping to wonder how necessary they really are and whether they are actually killing your ability to be and do something better. "

comment by Bound_up · 2017-08-30T15:31:06.048Z · LW(p) · GW(p)

I'm trying to find Alicorn's post, or anywhere else, where it is mentioned that she "hacked herself bisexual."

Replies from: Strangeattractor, jam_brand, jam_brand
comment by Strangeattractor · 2017-08-30T19:47:41.363Z · LW(p) · GW(p)

Do you mean where she hacked herself to become polyamorous? If so, you may be looking for this post http://lesswrong.com/lw/79x/polyhacking/

comment by jam_brand · 2017-08-31T07:26:16.763Z · LW(p) · GW(p)

Here's a post, though not from Alicorn, that has some info that may be of interest: http://lesswrong.com/lw/453/procedural_knowledge_gaps/3i49

comment by jam_brand · 2017-08-31T07:22:08.356Z · LW(p) · GW(p)
comment by torekp · 2017-08-28T22:06:17.689Z · LW(p) · GW(p)

Sean Carroll writes in The Big Picture, p. 380:

The small differences in a person’s brain state that correlate with different bodily actions typically have negligible correlations with the past state of the universe, but they can be correlated with substantially different future evolutions. That's why our best human-sized conception of the world treats the past and future so differently. We remember the past, and our choices affect the future.

I'm especially interested in the first sentence. It sounds highly plausible (if by "past state" we mean past macroscopic state), but can someone sketch the argument for me? Or give references?

For comparison, there are clear explanations available for why memory involves increasing entropy. I don't need anything that formal, but just an informal explanation of why different choices don't reliably correlate to different macroscopic events at lower-entropy (past) times.

Replies from: cousin_it
comment by cousin_it · 2017-08-29T11:15:57.003Z · LW(p) · GW(p)

It doesn't seem to be universally true. For example, a thermostat's action is correlated with past temperature. People are similar to thermostats in some ways, for example upon touching a hot stove you'll quickly withdraw your hand. But we also differ from thermostats in other ways, because small amounts of noise in the brain (or complicated sensitive computations) can lead to large differences in actions. Maybe Carroll is talking about that?

Replies from: torekp
comment by torekp · 2017-08-29T23:16:04.622Z · LW(p) · GW(p)

Good point. But consider the nearest scenarios in which I don't withdraw my hand. Maybe I've made a high-stakes bet that I can stand the pain for a certain period. The brain differences between that me, and the actual me, are pretty subtle from a macroscopic perspective, and they don't change the hot stove, nor any other obvious macroscopic past fact. (Of course by CPT-symmetry they've got to change a whole slew of past microscopic facts, but never mind.) The bet could be written or oral, and against various bettors.

Let's take a Pearl-style perspective on it. Given DO:Keep.hand.there, and keeping other present macroscopic facts fixed, what varies in the macroscopic past?

comment by whpearson · 2017-08-28T13:11:03.351Z · LW(p) · GW(p)

A short story - titled "The end of meaning"

It is propaganda for my improving autonomy work. Not sure it is actually useful in that regard. But it was fun to write and other people here might get a kick out of it.

Tamara blinked her eyes open. The fact she could blink, had eyes and was not in eternal torment filled her with elation. They'd done it! Against all the odds, the singularity had gone well. They'd defeated death, suffering, pain and torment with a single stroke. It was the start of a new age for mankind, one not ruled by a cruel nature but by a benevolent AI.

Tamara was a bit giddy about the possibilities. She could go paragliding in Jupiter's clouds, see supernovae explode and finally finish reading Infinite Jest. But what should she do first? Being a good rationalist, Tamara decided to look at the expected utility of each action. No possible action she could take would reduce the suffering of anyone or increase their happiness, because by definition the AI would be maximising those anyway with its superintelligence and human-aligned utility maximisation. She must look inside herself for which actions to take.

She had long been a believer in self-perfection and self-improvement. There were many different ways that she might self-improve: would she improve her piano playing, become an astronomy expert, or plumb the depths of understanding her brain so that she could choose to safely improve her inner algorithms? Try as she might, she couldn't make a decision between these options. Any of these changes to herself looked as valuable as any other. None of them would improve her lot in life. She should let the AI decide what she should experience to maximise her eudaimonia.

blip

Tamara struggled awake. That was some nightmare she had had about the singularity. Luckily it hadn't occurred yet; she could still fix it and make the most meaningful contribution to the human race's history by stopping death, suffering and pain.

As she went about her day's business solving decision theory problems, she was niggled by a possibility. What if the singularity had already happened and she was just in a simulation? It would make sense that the greatest feeling for people would be to solve the world's greatest problems. If the AI was trying to maximise Tamara's utility, ver might put her in a situation where she could be the most agenty and useful. Which would be just before the singularity. There would have to be enough pain and suffering within the world to motivate Tamara to fix it, and enough in her life to make it feel consistent. If so, none of her actions here were meaningful; she was not actually saving humanity.

She should probably continue to try and save humanity, because of indexical uncertainty.

Although if she had this thought, her life would be plagued by doubts about whether it is meaningful or not, so she is probably not in a simulation, as her utility is not being maximised. Probably...

Another thought gripped her: what if she couldn't solve the meaningfulness problem from her nightmare? She would be trapped in a loop.

blip

A nightmare within a nightmare; that was the first time this had happened to Tamara in a very long time. Luckily she had solved the meaningfulness problem a long time ago, else the thoughts and worries would have plagued her. We just need to keep humans as capable agents and work on intelligence augmentation. It might seem like a longer shot than a singleton AI requiring people to work together to build a better world, but humans would have a meaningful existence. They would be able to solve their own problems, make their own decisions about what to do based upon their goals and also help other people; they would still be agents of their own destiny.

Replies from: RowanE
comment by RowanE · 2017-08-28T22:00:20.966Z · LW(p) · GW(p)

Serves her right for making self-improvement a foremost terminal value even when she knows that's going to be rendered irrelevant; meanwhile, the loop I'm stuck in is of the first six hours spent in my catgirl volcano lair.

Replies from: MattG2, whpearson
comment by MattG2 · 2017-08-29T00:23:06.150Z · LW(p) · GW(p)

Is it possible to make something a terminal value? If so, how?

Replies from: RowanE
comment by RowanE · 2017-08-29T11:18:35.992Z · LW(p) · GW(p)

By believing it's important enough that when you come up with a system of values, you label it a terminal one. You might find that you come up with those just by analysing the values you already have and identifying some as terminal goals, but "She had long been a believer in self-perfection and self-improvement" sounds like something one decides to care about.

comment by whpearson · 2017-08-29T17:48:34.341Z · LW(p) · GW(p)

Self-improvement wasn't her terminal value; it was only derived from her utilitarianism. She liked to improve herself and see new vistas because it allowed her to be more efficient in carrying out her goals.

I could have had her spend some time exploring her hedonistic side before looking at what she was becoming (orgasmium) and not liking it from her previous perspective. But the ASI decided that this would scar her mentally and that the two jumps as dreams were the best way to get her out of the situation (or I didn't want to have to try to write highly optimised bliss, one of the two).

Replies from: RowanE
comment by RowanE · 2017-09-02T23:13:21.242Z · LW(p) · GW(p)

That's the reason she liked those things in the past, but "achieving her goals" is redundant, and she should have known years in advance about that, so it's clear that she's grown so attached to self-improvement that she sees it as an end in itself. Why else would anyone ever, upon deciding to look inside themselves instead of at expected utility, replace thoughts of paragliding in Jupiter with thoughts of piano lessons?

Hedonism isn't bad; orgasmium is bad because it reduces the complexity of fun to maximising a single number.

I don't want to be upgraded into a "capable agent" and then cast back into the wilderness from whence I came; I'd settle for a one-room apartment with food and internet before that, which, as a NEET, I can tell you is a long way down from Reedspacer's Lower Bound.