Luke is doing an AMA on Reddit

post by Spurlock · 2012-08-15T17:38:29.542Z · LW · GW · Legacy · 41 comments

I'm sure most of us are used to just being able to badger him about things in the comments here on LW, but for anyone interested, here's the link.

41 comments

Comments sorted by top scores.

comment by [deleted] · 2012-08-15T22:43:30.031Z · LW(p) · GW(p)

In contrast to some sibling commenters, I'm glad EY isn't doing an AMA. It sometimes didn't turn out well in the past when people unfamiliar with the sequences tried to ask him questions.

Replies from: Eliezer_Yudkowsky, FiftyTwo, None
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-16T21:44:47.197Z · LW(p) · GW(p)

That was exactly my reaction to reading Luke's AMA - "no, I probably shouldn't try this."

comment by FiftyTwo · 2012-08-16T16:39:21.571Z · LW(p) · GW(p)

What past instances are you referring to?

comment by [deleted] · 2012-08-16T18:36:23.112Z · LW(p) · GW(p)

I wonder if he'd be willing to do an AMA on /r/HPMOR

Replies from: RobertLumley
comment by RobertLumley · 2012-08-16T18:44:49.994Z · LW(p) · GW(p)

I don't think there would be anything to gain by this. Generally speaking, good questions about HPMOR get answered either in r/HPMOR or in the LW discussion threads on HPMOR. He would probably ignore bad questions anyway.

comment by siodine · 2012-08-15T18:30:02.770Z · LW(p) · GW(p)

Jesus, those comments are very eye-opening; there's a huge inferential distance even between LW/SIers and fellow futurologists. I hope there isn't a similar distance between futurologists and the general public.

Replies from: None, John_Maxwell_IV, buckwheats
comment by [deleted] · 2012-08-15T18:51:55.779Z · LW(p) · GW(p)

There probably is.

Replies from: dbaupp
comment by dbaupp · 2012-08-15T18:58:50.371Z · LW(p) · GW(p)

Possibly larger.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-08-15T21:53:49.364Z · LW(p) · GW(p)

Very definitely, it's easy to forget the level of knowledge you need to work at for this stuff. For example, I recently realised that in a room of competitive debaters (college-educated, well-read people) no-one knew what I meant by epistemic uncertainty. And very few philosophers know anything about QM or neurology...

TL;DR Illusion of transparency is a bitch.

Replies from: Wei_Dai, None, J_Taylor, NancyLebovitz, Raemon
comment by Wei Dai (Wei_Dai) · 2012-08-16T05:18:51.043Z · LW(p) · GW(p)

For example, I recently realised that in a room of competitive debaters (college-educated, well-read people) no-one knew what I meant by epistemic uncertainty.

Wait, what do you mean by "epistemic uncertainty"? The top Google results for the phrase contrast it with "aleatoric uncertainty" which is so esoteric that it's not even in LW's vocabulary (zero results for "aleatoric" on LW search).

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-16T22:28:57.797Z · LW(p) · GW(p)

"Epistemic uncertainty" sounds like a fancy way of saying "ignorance". "Aleatoric" I think means "stochastic" (the cognate of that word in Italian is not terribly uncommon).

Replies from: fubarobfusco
comment by fubarobfusco · 2012-08-16T23:33:41.446Z · LW(p) · GW(p)

Wikipedia says:

  • Aleatoric uncertainty, aka statistical uncertainty, which is unknowns that differ each time we run the same experiment. For an example of simulating the take-off of an airplane, even if we could exactly control the wind speeds along the runway, if we let 10 planes of the same make start, their trajectories would still differ due to fabrication differences. Similarly, if all we knew is that the average wind speed is the same, letting the same plane start 10 times would still yield different trajectories because we do not know the exact wind speed at every point of the runway, only its average. Aleatoric uncertainties are therefore something an experimenter cannot do anything about: they exist, and they cannot be suppressed by more accurate measurements.
  • Epistemic uncertainty, aka systematic uncertainty, which is due to things we could in principle know but don't in practice. This may be because we have not measured a quantity sufficiently accurately, or because our model neglects certain effects, or because particular data are deliberately hidden.

http://en.wikipedia.org/wiki/Uncertainty_quantification

Replies from: tim, Miller
comment by tim · 2012-08-17T01:03:18.532Z · LW(p) · GW(p)

Could we say that aleatoric uncertainty would be akin to not knowing whether a coin will land heads or tails (but we know the odds are 1:1) and epistemic uncertainty would be akin to not knowing the odds of the coin at all?

Replies from: Vaniver, fubarobfusco
comment by Vaniver · 2012-08-21T21:38:35.670Z · LW(p) · GW(p)

Aleatoric uncertainty is basically seeing randomness as a property of the universe, rather than a property of minds. Unless you verge into quantum territory, basically all randomness is actually epistemic uncertainty, and even if you verge into quantum territory, you can view quantum randomness as epistemic uncertainty.

Bayesians are comfortable viewing all uncertainties as epistemic. Non-Bayesians aren't, and all of the people I know who do professional decision-making under uncertainty dread someone even mentioning aleatoric uncertainty: it's a dead giveaway that the person isn't Bayesian, and a long, unproductive philosophical discussion may be necessary before they can get anywhere.

comment by fubarobfusco · 2012-08-17T02:18:16.980Z · LW(p) · GW(p)

The Wikipedia definition makes it sound more like aleatoric uncertainty is not knowing whether it will land heads or tails (because it will do something different each time), and epistemic uncertainty is not having a camera accurate enough to see whether it has landed heads or tails.

Replies from: jswan
comment by jswan · 2012-08-21T21:04:54.379Z · LW(p) · GW(p)

I realize that LW collectively doesn't like unreferenced definitions, but in this case maybe it's OK... a friend of mine whose PhD is in decision theory explained aleatory uncertainty to me as the uncertainty of chance with known parameters: if you roll a normal six-sided die, you know it's going to come up with a value in the range 1-6, but you don't know what it will be. There's no chance it will come up 7. Epistemic uncertainty is the uncertainty of chance with unknown parameters: there may not be enough data to know the bounds of an event, or it may have such large and random bounds that trying to place them is not very meaningful.
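
A minimal sketch of that distinction in code (plain Python; the coin, the bias value, and the Beta prior are illustrative assumptions, not from any of the comments above). The epistemic part, here a Beta posterior over a coin's unknown bias, shrinks as flips accumulate; the aleatoric part, the spread of a single flip, stays roughly the same no matter how much data we collect.

```python
import random

# Aleatoric uncertainty: even if the bias p were known exactly, individual
# flips would still vary -- nothing we learn removes that spread.
# Epistemic uncertainty: our uncertainty about p itself, which shrinks
# as we observe more flips (tracked here with a Beta(a, b) posterior).

true_p = 0.6          # hypothetical coin bias (unknown to the observer)
a, b = 1.0, 1.0       # Beta(1, 1) = uniform prior over the bias

for n_flips in (0, 10, 100, 1000):
    heads = sum(random.random() < true_p for _ in range(n_flips))
    a_post, b_post = a + heads, b + (n_flips - heads)

    # Epistemic: posterior mean and standard deviation of the bias p.
    mean_p = a_post / (a_post + b_post)
    var_p = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))

    # Aleatoric: Bernoulli spread of a single flip, sqrt(p * (1 - p)),
    # evaluated at the current estimate of p.
    aleatoric_sd = (mean_p * (1 - mean_p)) ** 0.5

    print(f"{n_flips:5d} flips: p estimate {mean_p:.3f}, "
          f"epistemic sd {var_p ** 0.5:.3f}, aleatoric sd {aleatoric_sd:.3f}")
```

Running it, the epistemic standard deviation falls from about 0.29 toward 0.015, while the aleatoric standard deviation stays near 0.5, which matches the "known parameters vs. unknown parameters" framing above.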

comment by Miller · 2012-08-17T02:41:28.776Z · LW(p) · GW(p)

You could probably mash any two buzzwords together, though. How about quantum rationality?

comment by [deleted] · 2012-08-16T05:13:23.353Z · LW(p) · GW(p)

epistemic uncertainty

I find myself in the embarrassing position of not knowing what that term refers to...

EDIT: A few upvotes but no definitions. In case it wasn't clear, can someone tell me what "epistemic uncertainty" means, if it is a thing?

Replies from: Suryc11, shminux
comment by Suryc11 · 2012-08-16T09:02:31.715Z · LW(p) · GW(p)

Isn't it simply the extent to which one is not certain about some (piece of) knowledge? At least that was my intuition when I first read that.

After googling, the closest definition I could find was on Wikipedia under systematic uncertainty--in contrast to statistical uncertainty (aleatoric uncertainty), apparently.

comment by Shmi (shminux) · 2012-08-16T05:48:06.177Z · LW(p) · GW(p)

Welcome to the club!

comment by J_Taylor · 2012-08-16T23:58:52.441Z · LW(p) · GW(p)

very few philosophers know anything about QM or neurology

Very few philosophers need to know anything about QM or neurology.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-08-17T13:35:17.021Z · LW(p) · GW(p)

QM potentially answers cool philosophical questions like "does cut & paste transportation preserve identity?" (it looks like it does, for our universe doesn't seem to encode any identity at all).

Neurology will most probably tell us nearly everything we will ever know about how humans actually work. I expect many questions formerly considered "philosophical" will be answered by this branch of science.

Therefore, I think nearly all philosophers need to know some QM and neurology.

Replies from: None, J_Taylor
comment by [deleted] · 2012-08-17T13:53:56.196Z · LW(p) · GW(p)

Therefore, I think nearly all philosophers need to know some QM and neurology.

The question is whether knowing a little QM and neurology is more or less harmful than knowing none at all.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-08-18T01:20:17.061Z · LW(p) · GW(p)

Nothing can protect you from people who fail to apply their knowledge well. Partial knowledge at least makes them aware that there is more to learn.

comment by J_Taylor · 2012-08-17T23:42:26.465Z · LW(p) · GW(p)

I agree with your first statement.

However, as for your second statement, I would really like an example, because I am not entirely sure what you mean. (I am sincerely requesting examples.)

Unfortunately, I strongly disagree with your third statement. The time it would take to learn QM with sufficient rigor to be interesting could be better spent reading the findings of experimental psychology or learning more mathematics. For the majority of philosophers, their subject matter simply does not overlap with QM in such a way that knowing rigorous QM would help them.

Further, I agree with what paper-machine seemed to imply in their post. A little QM can make a philosopher stupid.

Of course, in certain subjects, knowing QM or neurology should be mandatory.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-08-18T01:17:14.959Z · LW(p) · GW(p)

However, as for your second statement, I would really like an example, because I am not entirely sure what you mean. (I am sincerely requesting examples.)

Few quick examples:

  • A lot of philosophy of mind assumes there is a singular unified self, whereas neurology might lead you to think of the mind as a group of systems, and this could resolve some dilemmas.

  • Lots of traditional moral theories assume people make choices in certain ways that are not backed by observation of their brains.

  • Your willingness to accept materialist explanations for the mind probably increases exponentially the more you know about the mechanics of the brain. (Are there any dualist neuroscientists?)

  • A lot of philosophy uses 'armchair' reflection and introspection to get foundational intuitions and make judgements. Knowing the hardware you're running that on is probably helpful. (E.g. showing how easy it is to trigger people's intuitions one way or the other changed the debate about Gettier cases massively.)

Replies from: J_Taylor
comment by J_Taylor · 2012-08-18T20:58:15.742Z · LW(p) · GW(p)

I see and concede. I had been thinking at an excessively low level.

comment by NancyLebovitz · 2012-08-16T05:47:57.135Z · LW(p) · GW(p)

Do you mean they weren't familiar with the phrase "epistemic uncertainty" or they didn't know the concept?

Replies from: FiftyTwo
comment by FiftyTwo · 2012-08-16T16:30:26.198Z · LW(p) · GW(p)

The phrase. In context the argument I was making wasn't that complicated (uncertainty about the moral status of a fetus), but the inferential gap was in not realising that the phrasing I found natural was fairly incomprehensible.

comment by Raemon · 2012-08-16T04:07:28.630Z · LW(p) · GW(p)

If you need to do TL;DR for a single paragraph...

Dunno. Feels like there's some kind of joke opportunity here for inferential distance but I can't quite nail it.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-08-16T16:33:49.627Z · LW(p) · GW(p)

The TL;DR was mainly for the purposes of humour in this instance rather than actual ease of reading. It also seems a generally useful thing to be reminded of.

comment by John_Maxwell (John_Maxwell_IV) · 2012-08-16T04:43:22.593Z · LW(p) · GW(p)

Well remember, there are probably lots of people coming from /r/IAmA and leaving questions.

comment by buckwheats · 2012-08-17T23:33:22.552Z · LW(p) · GW(p)

The AMA may have received comments from curious people outside of r/futurology since there was an announcement for it on the front page. One thing about r/futurology, too, is that it recently tripled in size - only a few months ago it had around 6k subscribers. A lot of the growth came a week or two ago from a thread featured on r/bestof that got a lot of attention. Those things probably contributed to the inferential distance... If the AMA had happened a few months ago it may have been less, or indeed if it had happened a few months from now, counting on there being significant attrition of those new subscribers.

comment by Suryc11 · 2012-08-15T21:15:32.483Z · LW(p) · GW(p)

I was very pleasantly surprised to see the AMA announcement on Reddit's front page, given how relatively non-mainstream the S.I. is and how many page views Reddit gets (and gives).

Also, although there is a large inferential distance between Luke and most Redditors (as siodine noted), I thought Luke did a great job trying to bridge the intuition gap--with the usual abundance of links and all.

comment by Emile · 2012-08-17T08:14:07.051Z · LW(p) · GW(p)

The actual link is here.

comment by FiftyTwo · 2012-08-15T21:48:58.494Z · LW(p) · GW(p)

Although in theory we can badger Luke whenever we like, it's nice to have a socially approved opportunity to ask 'stupid' or off-topic questions.

comment by SilasBarta · 2012-08-15T22:35:45.824Z · LW(p) · GW(p)

I'm sure most of us are used to just being able to badger him about things in the comments

Huh? I'm not. In my case, he either pretends he doesn't see my stuff, never again loads the page where it was made, or answers with a non-specific citation.

comment by RobertLumley · 2012-08-15T22:09:42.850Z · LW(p) · GW(p)

This is such low-hanging fruit that I'm embarrassed it never occurred to me before. Props to Luke for doing this. One by EY might be worth the time as well, especially given how popular HPMOR is on Reddit.

Replies from: TylerJay
comment by TylerJay · 2012-08-16T00:02:58.840Z · LW(p) · GW(p)

Well done

comment by Joshua Hobbes (Locke) · 2012-08-15T21:31:15.911Z · LW(p) · GW(p)

Why is it just Luke doing the AMA? Eliezer already has an account for HPMOR, after all.